
Speech Analytics: Benefits and its New Importance in Telecommunication Technology

Reading time 3 min
Views 1.2K

Speech analytics is the process of analysing recorded speech, such as phone calls, to gather customer information and improve communication and future customer interactions. As a technology, speech analytics has evolved especially rapidly over the last few years. It makes it possible to structure and analyse previously lost streams of insight-rich data, such as phone conversations. Empowered with this technology, operations teams can gather valuable business intelligence to drive improvements in call performance. It is smart in that it automatically identifies areas where customer service or sales teams may need additional call training, which in turn improves call outcomes. As a process, speech analytics can isolate the buzzwords and phrases used most frequently within a given time period and indicate whether their usage is trending up or down. This data helps call managers spot changes in consumer behaviour so that action can be taken to improve customer satisfaction.

Zadarma is a leading global VoIP provider that offers a smart speech analytics feature as part of its easy-to-use telecommunications offering. The tool is free as part of the wider PBX phone system bundles, covered by the included free recognition minutes. Zadarma's analytics feature provides data access to every internal or external call conversation. The benefits of speech analytics include:

Read more
Total votes 3: ↑3 and ↓0 +3
Comments 0

Coins classifier Neural Network: Head or Tail?

Reading time 14 min
Views 1.3K

Home of this article: https://robotics.snowcron.com/coins/02_head_or_tail.htm

The global objective of this series is to build a coin classifier capable of scanning your pocket change and finding rare/valuable coins. This is the second article in the series, so let me remind you what happened earlier (https://habr.com/ru/post/538958/).

During the previous step we assembled a rather large dataset of image pairs downloaded from meshok.ru, an online coin marketplace. Those images were uploaded by people we do not know, and though each pair is supposed to contain the coin's head in one image and its tail in the other, we cannot rule out getting two heads and no tail, or vice versa. Also, at the moment we have no idea which image contains the head and which the tail: this may be important when we feed data to our final classifier.

So let's write a program to distinguish heads from tails. It is a rather simple task, involving a convolutional neural network and transfer learning.
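The teaser doesn't show the article's actual model, so purely as a flavor of the approach, here is a minimal transfer-learning sketch: a frozen ImageNet backbone plus a new binary head (the backbone choice, input size, and training setup are my assumptions, not the article's):

```python
import tensorflow as tf

# Pretrained ImageNet backbone; we keep its weights frozen
# and only train a small new classification head.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # head (1) vs tail (0)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```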

As before, we are going to use the Google Colab environment, taking advantage of the free GPU it grants access to. We will store data on Google Drive, so the first thing we need to do is allow Colab to access the Drive:
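The mounting step is the standard Colab snippet:

```python
# Authorize and mount Google Drive inside the Colab VM.
from google.colab import drive
drive.mount('/content/drive')
```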

Read more
Rating 0
Comments 0

Implementing Offline traceroute Tool Using Python

Reading time 30 min
Views 3.9K

Hey everyone! This post was born from a question asked by an IT forum member. The summary of the question looked as follows:


  • There is a set of text files containing routing tables collected from various network devices.
  • Each file represents one device.
  • Device platforms and routing table formats may vary.
  • It is required to analyze a routing path from any device to an arbitrary subnet or host on-demand.
  • The resulting output should contain a list of the routing table entries that are used for routing to the given destination on each hop.

The person who asked the question worked as a TAC engineer. While troubleshooting issues, TAC engineers often collect, or receive from customers, text 'snapshots' of the network state for further offline analysis. Some automation could really save a lot of time.


I found this task interesting and applicable to my own needs, so I decided to write a proof-of-concept implementation in Python 3 for the Cisco IOS, IOS-XE, and ASA routing table formats.


In this article, I'll try to reconstruct the development process of the resulting script and the considerations behind each step.


Let’s get started.
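At the heart of any such tool is a longest-prefix-match lookup over the parsed routing entries. As a taste of what's ahead (the sample routes and function names are mine, not the script's), here is a minimal sketch using Python's standard ipaddress module:

```python
import ipaddress

# Toy routing table: (prefix, next hop) pairs as parsed from a device file.
routes = [
    ("0.0.0.0/0", "192.0.2.1"),
    ("10.0.0.0/8", "192.0.2.2"),
    ("10.220.0.0/16", "192.0.2.3"),
]

def lookup(destination, routes):
    """Return the most specific (longest-prefix) route matching destination."""
    dst = ipaddress.ip_address(destination)
    candidates = [(ipaddress.ip_network(prefix), nh) for prefix, nh in routes]
    matches = [(net, nh) for net, nh in candidates if dst in net]
    return max(matches, key=lambda m: m[0].prefixlen, default=None)

print(lookup("10.220.1.1", routes))  # the /16 wins over /8 and the default
```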

Read more →
Rating 0
Comments 0

Prometheus in Action: from default counters to SLO-related queries

Reading time 8 min
Views 7K

All Prometheus metrics are based on time series: streams of timestamped values belonging to the same metric. Each time series is uniquely identified by its metric name and optional key-value pairs called labels. The metric name specifies some characteristic of the measured system, such as http_requests_total - the total number of received HTTP requests. In practice, you will often be interested in some subset of the values of a metric, for example, in the number of requests received by a particular endpoint; and here is where the labels come in handy. We can partition a metric by adding an endpoint label and see the statistics for a particular endpoint: http_requests_total{endpoint="api/status"}. Every metric has two automatically created labels: job and instance. We will see their roles in the next section.
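As a quick illustration, such a label-filtered selector can be evaluated programmatically through Prometheus' standard instant-query endpoint, /api/v1/query (the server address below is a placeholder):

```python
import requests

# Run an instant query; the result is a set of labeled samples in JSON.
resp = requests.get(
    "http://localhost:9090/api/v1/query",
    params={"query": 'http_requests_total{endpoint="api/status"}'},
)
for series in resp.json()["data"]["result"]:
    print(series["metric"], series["value"])  # labels and [timestamp, value]
```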

Prometheus provides a functional query language called PromQL. The result of the query might be evaluated to one of four types:

Scalar (aka float)

String (currently unused)

Instant Vector - a set of time series that have exactly one value per timestamp.

Range Vector - a set of time series that have a range of values between two timestamps.

At first glance, an Instant Vector might look like an array, and a Range Vector like a matrix.

If that were the case, then a Range Vector for a single time series would "downgrade" to an Instant Vector. However, that's not the case:

Read more
Rating 0
Comments 2

Distributed Tracing for Microservice Architecture

Reading time 8 min
Views 5K

What is distributed tracing? Distributed tracing is a method used to profile and monitor applications, especially those built using a microservices architecture. Distributed tracing helps pinpoint where failures occur and what causes poor performance.

Let's have a look at a simple prototype. A user fetches information about a shipment from the `logistic` service. The `logistic` service does some computation and fetches data from a database. It doesn't know the actual status of the shipment, so it has to fetch the updated status from another service, `tracking`. The `tracking` service also needs to fetch data from a database and do some computation.
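The teaser doesn't name the tracing stack used in the article; purely as an illustration of how such nested units of work might be recorded, here is a sketch using the OpenTelemetry Python API (the service logic is stubbed out):

```python
from opentelemetry import trace

tracer = trace.get_tracer("logistic")

# Stubs standing in for the real database call and the tracking service.
def db_fetch(shipment_id):
    return {"id": shipment_id}

def fetch_status(shipment_id):
    return "in transit"

def get_shipment(shipment_id):
    # Each unit of work becomes a span; nested spans form one trace,
    # which is what a tracing UI renders as a timeline.
    with tracer.start_as_current_span("logistic: db query"):
        shipment = db_fetch(shipment_id)
    with tracer.start_as_current_span("tracking: get status"):
        shipment["status"] = fetch_status(shipment_id)
    return shipment
```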

In the screenshot below, we see the whole life cycle of a request issued to the `logistic` service:

Read more
Rating 0
Comments 0

Coins Classification using Neural Networks

Reading time 19 min
Views 2.8K

See more at robotics.snowcron.com

This is the first article in a series dedicated to coin classification.

With countless "dogs vs cats" or "find a pedestrian on the street" classifiers all over the Internet, coin classification doesn't look like a difficult task. At first. Unfortunately, it is an order of magnitude harder - a formidable challenge indeed. You can easily tell heads from tails? Great. Can you figure out if the number is shifted 1 mm to the left? See, from the classifier's view it is still the same head... while it can make the difference between a common coin priced according to the number on it and a rare one, 1000 times more expensive.

Of course, we can do what we usually do in image classification: provide 10,000 sample images... No, wait, we can not. Some types of coins are rare indeed - you need to sort through a BASKET (10 liters) of coins to find one. Simple arithmetic suggests that to get 10,000 images of DIFFERENT coins you would need 10,000 baskets of coins to start with. Well, and unlimited time.

So it is not that easy.

Anyway, we are going to begin by getting a large number of images and work from there. We will use Russian coins as an example, as Russia had a monetary reform in 1994, so the number of coins one can expect to find in a pocket is limited - unlike the USA with its 200 years of monetary history. And yes, we are ONLY going to focus on current coins: the ultimate goal of our work is to write a smartphone program that classifies the coins you received in a grocery store as change. Which makes things even worse, as we can not count on good lighting and quality cameras anymore. But we'll still try.

In addition to "only Russian coins, beginning from 1994", we are going to add an extra limitation: no special-occasion coins. Those coins look distinctive, so anyone can tell such a coin is special. We focus on REGULAR coins, which limits their number severely.

Don't get me wrong: if we need to apply the same approach to the full list of coins... it will work. But I got 15 GB of images for this limited set; can you imagine how large the complete set would be?!

To get the images, I am going to scan meshok.ru, one of the largest Russian coin sites. This site allows buyers and sellers to find each other; sellers can upload images... just what we need. Unfortunately, a business-oriented seller can easily upload his 1-rouble image to the 1, 2, 5, and 10 rouble topics, just to increase exposure.

So we can not count on the topic name; we have to determine ourselves which coin is in the photo.

To scan the site, a simple scanner was written, based on Python's Beautiful Soup library. In just a few hours I had over 50,000 photos. Not a lot by Machine Learning standards, but definitely a start.

After we get the images, we have to - unfortunately - review them by hand, looking for images we do not want in our training set, or for images that should be edited somehow. For example, someone could have uploaded a photo of his cat. We don't need a cat in our dataset.

First, we delete all images that can not be split into head/tail pairs.
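The scanner itself isn't shown in this teaser; a minimal sketch of the approach with requests and Beautiful Soup might look like this (the selector logic is illustrative and not tied to meshok.ru's real markup):

```python
import requests
from bs4 import BeautifulSoup

def collect_image_urls(page_url):
    """Collect the 'src' of every <img> tag on a listing page."""
    html = requests.get(page_url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    return [img["src"] for img in soup.find_all("img") if img.get("src")]
```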

Read more
Rating 0
Comments 0

Tarantool: an analyst's view

Reading time 8 min
Views 1.9K

Hi all! I'm Andrey Kapustin. I work as a system analyst at Mail.ru Group. Our products form a unified ecosystem in which many independent infrastructures generate data: taxi and food delivery services, email services, social networks, and so on. The faster and more precisely we can predict a client's needs, the sooner and more correctly we can offer our products.

Many system analysts and engineers are keen to know: 

  1. How to design the architecture of a trigger platform for real-time marketing?
  2. How to arrange a data structure that would be in line with the requirements of a marketing strategy for interacting with clients?
  3. How to ensure stable operation of the system under very heavy workloads?

Such systems are based on high-load processing and Big Data analysis technologies. We have accumulated considerable experience in these areas, and our expertise is in high demand on the market. I'm going to show how we help our customers switch from offline to online in their interactions with clients, using Real-Time Marketing solutions based on Tarantool.
Read more →
Total votes 26: ↑26 and ↓0 +26
Comments 0

Visualizing Network Topologies: Zero to Hero in Two Days

Reading time 49 min
Views 29K

Hey everyone! This is a follow-up article on the local Cisco Russia DevNet Marathon online event I attended in May 2020. It was a series of educational webinars on network automation, followed by daily challenges based on the topics discussed.
On the final day, the participants were challenged to automate the topology analysis and visualization of an arbitrary network segment and, optionally, to track and visualize changes.


The task was definitely not trivial and not widely covered in public blog posts. In this article, I break down my solution, which took first place, and describe the selected toolset and the considerations behind it.
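The full toolset is covered in the article; just to give a flavor of the visualization half of the task, here is a minimal graph-drawing sketch with networkx (the neighbor data is made up, as if parsed from CDP/LLDP tables):

```python
import networkx as nx
import matplotlib.pyplot as plt

# Hypothetical adjacency list, e.g. parsed from CDP/LLDP neighbor tables.
links = [("core1", "dist1"), ("core1", "dist2"),
         ("dist1", "access1"), ("dist2", "access2")]

g = nx.Graph()
g.add_edges_from(links)
nx.draw(g, with_labels=True, node_color="lightblue")
plt.savefig("topology.png")
```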

Let's get started.


Read more →
Total votes 2: ↑2 and ↓0 +2
Comments 0

Big / Bug Data: Analyzing the Apache Flink Source Code

Reading time 11 min
Views 842

Applications used in the field of Big Data process huge amounts of information, often in real time. Naturally, such applications must be highly reliable so that no error in the code can interfere with data processing. To achieve high reliability, one needs to keep a wary eye on the code quality of projects developed for this area. The PVS-Studio static analyzer is one solution to this problem. Today, the Apache Flink project, developed by the Apache Software Foundation, one of the leaders in the Big Data software market, was chosen as a test subject for the analyzer.
Read more →
Total votes 1: ↑0 and ↓1 -1
Comments 0

The Rules for Data Processing Pipeline Builders

Reading time 5 min
Views 3.7K


"Come, let us make bricks, and burn them thoroughly."
– legendary builders

You may have noticed by 2020 that data is eating the world. And whenever any reasonable amount of data needs processing, a complicated multi-stage data processing pipeline will be involved.


At Bumble — the parent company operating Badoo and Bumble apps — we apply hundreds of data transforming steps while processing our data sources: a high volume of user-generated events, production databases and external systems. This all adds up to quite a complex system! And just as with any other engineering system, unless carefully maintained, pipelines tend to turn into a house of cards — failing daily, requiring manual data fixes and constant monitoring.


For this reason, I want to share certain good engineering practices with you, ones that make it possible to build scalable data processing pipelines from composable steps. While some engineers understand such rules intuitively, I had to learn them by doing, making mistakes, fixing, sweating and fixing things again…


So behold! I bring you my favourite Rules for Data Processing Pipeline Builders.
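As a tiny illustration of what "composable steps" can mean in practice (the helper and step names are mine, not Bumble's), a pipeline can be just a composition of small, individually testable functions:

```python
from functools import reduce

# Each step takes the data and returns transformed data.
def parse(lines):
    return [line.strip() for line in lines]

def drop_empty(lines):
    return [line for line in lines if line]

def dedupe(lines):
    return list(dict.fromkeys(lines))  # preserves order, drops duplicates

def pipeline(*steps):
    """Compose steps left to right into a single callable."""
    return lambda data: reduce(lambda acc, step: step(acc), steps, data)

process = pipeline(parse, drop_empty, dedupe)
print(process([" a ", "b", "", "a"]))  # -> ['a', 'b']
```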

Read more →
Total votes 9: ↑9 and ↓0 +9
Comments 2

Spring Boot app with Apache Kafka in Docker container

Reading time 4 min
Views 29K

Privet, comrades!

In this article I'll show how easy it is to set up a Spring Java app with a Kafka message broker. We will use Docker containers for the Kafka ZooKeeper/broker apps and configure plaintext authorization for access from both local and external networks.

A link to the final project on GitHub can be found at the end of the article.

Read more
Total votes 1: ↑1 and ↓0 +1
Comments 1

Patroni cluster (with Zookeeper) in a docker swarm on a local machine

Reading time 20 min
Views 9.1K

If you store crucial data (particularly in SQL databases), you have probably thought about building some kind of safe cluster: a distant guardian protecting consistency and availability at all times. Even if the main server with your precious database gets knocked out, the show must go on, right? This basically means the database must still be available, with its data up to date with the data on the failed server.

As you might have noticed, there are dozens of ways to go, and Patroni is just one of them. There are plenty of articles providing more or less detailed comparisons of the available options, so I assume I'm free to skip the part about luring you to Patroni's side. Let's start from the point where, among other options, you are already leaning towards Patroni and are willing to try it out in a more or less real-case setup.

I am not originally a DevOps engineer, so when the need for a high-availability cluster arose, I caught every single bump on the road. I hope this tutorial will help you get the job done with ease! If you don't want any more explanations, jump right in. Otherwise, you might want to read some more notes on the setup I chose.

Read more
Rating 0
Comments 1

OPPO, Huawei, Xiaomi. Chinese app stores join forces to take on Google

Reading time 2 min
Views 2.7K

Major players in the Chinese app market are joining forces to take on the almighty Google Play store. Xiaomi, Oppo and Vivo are reported to launch the Global Developer Service Alliance (GDSA), a platform allowing Android developers to publish their apps in the partnering stores from one upload.

The GDSA is expected to launch in nine countries—including India, Indonesia, Malaysia, Russia, Spain, Thailand, the Philippines, and Vietnam—although paid app support may vary across the regions. Canalys’ Nicole Peng explains the wide reach of this alliance:

By forming this alliance each company will be looking to leverage the others’ advantages in different regions, with Xiaomi’s strong user base in India, Vivo and Oppo in Southeast Asia, and Huawei in Europe. 

Read more
Total votes 2: ↑2 and ↓0 +2
Comments 0

Linux Switchdev the Mellanox way

Reading time 7 min
Views 2.7K
This is a transcription of a talk that was presented at CSNOG 2020 — video is at the end of the page



Greetings! My name is Alexander Zubkov. I work at Qrator Labs, where we protect our customers against DDoS attacks and provide BGP analytics.

We started using Mellanox switches around two or three years ago. At that time we got acquainted with Switchdev in Linux, and today I want to share our experience with you.
Total votes 18: ↑18 and ↓0 +18
Comments 0

5 Thought-Provoking Use Cases Of Blockchain In Diverse Industries

Reading time 2 min
Views 1.2K

Blockchain is a decentralized technology that maintains a record of all transactions occurring over a peer-to-peer network. Due to its several different high-level use cases, numerous industries have described Blockchain as 'magic beans.'

Blockchains store records in a decentralized, interconnected system. This technology lessens vulnerability and enhances transparency in all industrial sectors, as information is stored digitally and there is no centralized point through which transactions are carried out.

Do You Know?

Read more
Total votes 1: ↑0 and ↓1 -1
Comments 1

Agreements as Code: how to refactor IaC and save your sanity?

Reading time 9 min
Views 1.2K


Before we start, I'd like to get on the same page with you. So, could you please answer how much time it will take to:


  • Create a new environment for testing?
  • Update java & OS in the docker image?
  • Grant access to servers?

Here is the spoiler from TechLeadConf. Unfortunately, it's in Russian.


It will take longer than you expect. I will explain why.

Read more →
Total votes 3: ↑3 and ↓0 +3
Comments 0

Mysql 8.x Group Replication (Master-Slave) with Docker Compose

Reading time 5 min
Views 5.4K

This post covers the following situation: how to set up simple MySQL services with group replication, all dockerized. In our case, we'll take the latest MySQL (version 8.x.x).

FYI: all the code mentioned (tested manually) is located here.

I will skip uninteresting steps like 'what are MySQL and Docker, and why we chose them, etc.'. We want to set up a DB that is as trouble-proof as possible. That's our plan.

Read more →
Rating 0
Comments 1

InterSystems IRIS – the All-Purpose Universal Platform for Real-Time AI/ML

Reading time 22 min
Views 931
Author: Sergey Lukyanchikov, Sales Engineer at InterSystems

Challenges of real-time AI/ML computations


We will start with examples that we faced in our Data Science practice at InterSystems:

  • A “high-load” customer portal is integrated with an online recommendation system. The plan is to reconfigure promo campaigns at the level of the entire retail network (we will assume that a “segment-tactic” matrix will be used instead of a “flat” promo campaign master). What will happen to the recommender mechanisms? What will happen to data feeds and updates into the recommender mechanisms (the volume of input data having increased 25,000 times)? What will happen to the recommendation rule generation setup (the need to reduce the recommendation rule filtering threshold 1,000 times due to a thousandfold increase in the volume and “assortment” of the rules generated)?
  • An equipment health monitoring system uses “manual” data sample feeds. Now it is connected to a SCADA system that transmits thousands of process parameter readings each second. What will happen to the monitoring system (will it be able to handle equipment health monitoring on a second-by-second basis)? What will happen once the input data receives a new block of several hundred columns with sensor readings recently implemented in the SCADA system (will it be necessary, and for how long, to shut down the monitoring system to integrate the new sensor data into the analysis)?
  • A complex of AI/ML mechanisms (recommendation, monitoring, forecasting) depends on each other’s results. How many man-hours will it take every month to adapt those AI/ML mechanisms’ functioning to changes in the input data? What is the overall “delay” in supporting business decision-making by the AI/ML mechanisms (the refresh frequency of supporting information against the feed frequency of new input data)?

Read more →
Rating 0
Comments 0

The 2020 National Internet Segment Reliability Research

Reading time 9 min
Views 9.4K

The National Internet Segment Reliability Research explains how the outage of a single Autonomous System might affect the connectivity of the impacted region with the rest of the world. Most of the time, the most critical AS in the region is the dominant ISP on the market, but not always.

As the number of alternate routes between ASes increases (and do not forget that the Internet stands for “interconnected network”, and each network is an AS), so do the fault tolerance and stability of the Internet across the globe. Although some paths are more important than others from the start, establishing as many alternate routes as possible is the only viable way to ensure an adequately robust network.

The global connectivity of any given AS, regardless of whether it is an international giant or a regional player, depends on the quantity and quality of its paths to Tier-1 ISPs.

Usually, Tier-1 implies an international company offering global IP transit service over connections with other Tier-1 providers. Nevertheless, there is no guarantee that such connectivity will be maintained all the time. For many ISPs at all “tiers”, losing connection to just one Tier-1 peer would likely render them unreachable from some parts of the world.
Read more →
Total votes 26: ↑26 and ↓0 +26
Comments 0

The hunt for vulnerability: executing arbitrary code on NVIDIA GeForce NOW virtual machines

Reading time 5 min
Views 6.6K

Introduction


Against the backdrop of the coronavirus pandemic, the demand for cloud gaming services has noticeably increased. These services provide the computing power to launch video games and stream gameplay to user devices in real time. The most obvious advantage of this type of gaming is that gamers do not need high-end hardware: an inexpensive computer is enough to run the client while the remote server carries out all the calculations - handy for passing the time in self-isolation.

NVIDIA GeForce NOW is one of these cloud-based game streaming services. According to Google Trends, worldwide search queries for GeForce NOW peaked in February 2020. This correlates with the beginning of quarantine restrictions in many Asian, European, and North and South American countries, as well as other world regions. At the same time in Russia, where the self-isolation regime began in March, we see a similar picture with a corresponding delay.

Given the high interest in GeForce NOW, we decided to explore this service from an information security standpoint.
Read more →
Total votes 6: ↑6 and ↓0 +6
Comments 0