Kaspersky has announced its investment in Motive NT, the developer of the “Altai” neuroprocessor. Let's take a look at what neuroprocessors are, how they differ from traditional processors, and why this field promises so much for the development of computer technology.
The computer brain
All modern computers, tablets, smartphones, networking devices, and digital players have a central processing unit (CPU): a general-purpose electronic circuit that executes computer code. Interestingly, traditional processors, whose operating principles were laid down in the 1940s, have not changed much since then. The CPU reads instructions and then executes them. At the CPU level, all programs are broken down into the simplest of tasks: commands such as “read from memory”, “write to memory”, “add two numbers”, “multiply”, and “divide”. Although there are many nuances to how a CPU works, the important point for today's topic is that for many years CPUs could perform only a single operation per cycle. Of course, these cycles could be very numerous: at first hundreds of thousands, then millions, and today billions of cycles per second. But until fairly recently (the mid-2000s), the typical home computer or laptop had a CPU with only a single core.
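To see this breakdown in action, you can ask Python to disassemble a one-line function into its elementary instructions. Bytecode is not real machine code, but the principle is the same: load, add, return, one simple step at a time. (The exact opcode names vary between Python versions.)

```python
# Even a one-line program breaks down into simple, sequential
# instructions. Python's built-in disassembler makes this visible.
import dis

def add(a, b):
    return a + b

dis.dis(add)
# Prints something like:
#   LOAD_FAST    a    <- "read from memory"
#   LOAD_FAST    b    <- "read from memory"
#   BINARY_ADD        <- "add two numbers"
#   RETURN_VALUE      <- hand the result back
```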
The ability to run multiple programs simultaneously on a single CPU was achieved by time-slicing: one program was given a few clock cycles, then resources were handed to another, then to a third, and so on. When affordable multi-core processors came onto the market, resources began to be distributed more efficiently. It became possible not only to run different programs on different cores, but also to run a single program across several cores at once. At first this was not an easy task, and for a while many programs and games were not optimized for multi-core or multi-processor systems.
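As a small illustration, here is a sketch of splitting one program's work across several cores using Python's standard multiprocessing module; the workload itself (summing squares) is an arbitrary stand-in for any job that can be cut into independent chunks.

```python
# Spread one program's work across several CPU cores.
from multiprocessing import Pool

def sum_of_squares(chunk):
    """Worker function: each core processes its own chunk of the data."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(10_000_000))
    # Split the data into 4 strided chunks; up to 4 cores work in parallel.
    chunks = [data[i::4] for i in range(4)]
    with Pool(processes=4) as pool:
        partial_sums = pool.map(sum_of_squares, chunks)
    print(sum(partial_sums))
```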
Today, the CPUs available to home users can have 16 or even 32 cores. While these are impressive numbers, they are far from the maximum possible even in consumer technology. For example, the Nvidia GeForce RTX 3080 Ti video card has 10,240 cores! Why such a huge difference? Because the cores in a traditional CPU are much more complex than the cores in a video card. The cores of a dedicated graphics processing unit (GPU) are far more primitive: they can perform only the simplest operations, but they do so very quickly. This comes in handy when you need to perform billions of simple operations per second. In a computer game, for example, calculating the lighting of a scene requires many relatively simple calculations to be carried out for every point in the image.
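As a toy illustration of the kind of work a GPU is built for, the sketch below applies one simple lighting formula (Lambertian diffuse shading, a standard graphics-textbook formula) to every pixel of a Full HD frame at once; the random surface normals are made-up stand-ins for real scene geometry.

```python
# The same simple formula applied independently to millions of pixels:
# exactly the workload GPU cores are designed for. Here NumPy does it
# in one vectorized sweep on the CPU.
import numpy as np

height, width = 1080, 1920
# Hypothetical per-pixel surface normals (unit vectors), one per pixel.
normals = np.random.randn(height, width, 3)
normals /= np.linalg.norm(normals, axis=2, keepdims=True)

light_dir = np.array([0.0, 0.0, 1.0])  # light shining along the z-axis

# One simple operation per pixel: brightness = max(0, normal . light).
brightness = np.clip(normals @ light_dir, 0.0, 1.0)
print(brightness.shape)  # (1080, 1920): two million tiny calculations
```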
Despite these nuances, the processor cores used in conventional CPUs and in video cards are not fundamentally different from each other. Neuromorphic processors, however, are very different from both CPUs and GPUs. They do not attempt to implement a set of elements that perform arithmetic operations sequentially or in parallel. Instead, they aim to reproduce the structure of the human brain!
The smallest building block in computing is the humble transistor. A typical CPU in any computer or smartphone contains several billion of these microscopic elements. In the human brain, the corresponding basic element is the neuron, or nerve cell. Neurons are connected to each other by synapses. Tens of billions of neurons make up the human brain, a highly complex self-learning system. The discipline known as neuromorphic engineering has for decades focused on reproducing the structure of the human brain, at least partially, in the form of electronic circuits. The Altai processor, developed with this approach, is a hardware implementation of brain tissue with its neurons and synapses.
Neuroprocessors and neural networks
But let's not get carried away just yet. Although researchers have managed to reproduce certain elements of the brain's structure in semiconductors, this does not mean we will see digital copies of humans anytime soon. That may be the ultimate goal of such research, but it is a far more complex undertaking. In the meantime, neuroprocessors (semiconductor copies of our brain's structure) have more practical applications: the technology is needed to implement machine-learning systems and the neural networks that underpin them.
A neural network, or more precisely an artificial neural network (as opposed to the natural one inside our heads), consists of a set of cells capable of processing and storing information. The perceptron, the classic neural network model, was developed in the 1960s. Its array of cells can be compared to a camera's sensor matrix, but one that also has the ability to learn, interpret the resulting image, and find patterns in it. Special connections between cells, and different types of cells, process information so that the network can, for example, distinguish between alphabet cards held in front of the lens. But that was 60 years ago. Since then, and especially in the last decade, machine learning and neural networks have come into wide use for many everyday tasks.
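For the curious, here is a minimal sketch of that classic perceptron idea in modern code; the task (learning the logical AND function) and the learning rate are illustrative choices, not part of the original model.

```python
# A minimal perceptron in the spirit of the 1960s model: a weighted
# sum, a threshold, and a simple rule that nudges the weights after
# every mistake. It learns the logical AND of two inputs.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # inputs
y = np.array([0, 0, 0, 1])                      # AND of the inputs

weights = np.zeros(2)
bias = 0.0
learning_rate = 0.1

for epoch in range(20):
    for inputs, target in zip(X, y):
        # Fire (output 1) if the weighted sum crosses the threshold.
        output = 1 if inputs @ weights + bias > 0 else 0
        # Perceptron learning rule: move weights toward the target.
        error = target - output
        weights += learning_rate * error * inputs
        bias += learning_rate * error

for inputs in X:
    print(inputs, 1 if inputs @ weights + bias > 0 else 0)
```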
The problem of recognizing letters of the alphabet was solved a long time ago. As drivers know very well, speed cameras can now recognize vehicle license plates from any angle, day or night, even when they are covered in mud. Another typical neural network task is to take a photo (say, an aerial photo of a stadium) and count the number of people in it. These tasks have one thing in common: the inputs always differ slightly. A conventional program could probably read a license plate photographed straight on, but it would fail on photos taken at an angle. To train a neural network for this, we feed it lots of photos of license plates (or whatever else we want it to recognize) so that it learns to distinguish the letters and numbers on the plate (or the other relevant features of the input). Sometimes such a network becomes so specialized that it can make a diagnosis better or faster than a doctor can, for example in the medical field.
Let's get back to the implementation of neural networks. Although the calculations required to run a neural network algorithm are quite simple, they come in enormous numbers. This job is better suited not to a traditional CPU but to a video card with thousands or tens of thousands of compute modules. It is also possible to build a more specialized chip that performs only the set of calculations required for a particular learning algorithm; this can be somewhat cheaper and slightly more effective. However, all these devices still build the neural network (the sets of cell nodes connected by multiple links that send and receive information) at the software level. A neuroprocessor implements the neural network scheme at the hardware level.
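To illustrate what "at the software level" means, here is a minimal sketch of a tiny network reduced to the matrix arithmetic it really consists of; the layer sizes and random weights are purely illustrative.

```python
# A neural network "at the software level" is just a stack of simple
# matrix multiplications and thresholds. Real networks run millions
# of these simple operations, which is why GPUs help so much.
import numpy as np

rng = np.random.default_rng(0)

# A tiny two-layer network: 784 inputs -> 128 hidden cells -> 10 outputs.
W1 = rng.standard_normal((784, 128)) * 0.01
W2 = rng.standard_normal((128, 10)) * 0.01

def forward(x):
    """One pass through the network: multiply, threshold, repeat."""
    hidden = np.maximum(0, x @ W1)  # ReLU: each cell either fires or not
    return hidden @ W2              # output scores for 10 classes

x = rng.standard_normal(784)        # e.g. a flattened 28x28 image
print(forward(x).shape)             # (10,)
```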
This hardware implementation is much more efficient. Intel's Loihi neuroprocessor contains 131,072 artificial neurons, connected by a much larger number of synapses (more than 130 million). One of the advantages of this scheme is that it consumes very little energy when idle, whereas traditional GPUs draw considerable power even when not doing useful work. This, together with the theoretically higher performance of neural network training, adds up to much lower energy consumption. For example, the first-generation Altai processor consumes a thousand times less energy than a comparable GPU implementation.
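Chips like Loihi are built around spiking neurons: a neuron stays silent, and burns almost no energy, until input spikes actually arrive, which is part of why such hardware is so cheap to run when idle. Below is a minimal software sketch of a single leaky integrate-and-fire neuron, the standard textbook model of a spiking neuron; all the constants are illustrative.

```python
# One "leaky integrate-and-fire" neuron: its potential builds up as
# input spikes arrive, leaks away over time, and fires when it
# crosses a threshold. With no input, nothing happens at all.
leak = 0.9          # membrane potential decays each time step
threshold = 1.0     # potential at which the neuron fires a spike
potential = 0.0

# Incoming spike weights per time step (0.0 = no input, i.e. idle).
inputs = [0.0, 0.4, 0.0, 0.5, 0.4, 0.0, 0.0, 0.9]

for t, spike_in in enumerate(inputs):
    potential = potential * leak + spike_in   # integrate with leak
    if potential >= threshold:                # fire and reset
        print(f"t={t}: spike!")
        potential = 0.0
```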
Neural networks and security
130,000 neurons are very few compared to the tens of billions in the human brain. Research that will bring us closer to fully understanding how the brain works, and to creating much more effective self-learning systems, is only just beginning. More importantly, demand for neuroprocessors is already growing, because in theory they let us solve existing problems more effectively. A pattern recognizer built into your smartphone could, for example, help you tell fruits apart when foraging in the forest. Chips specialized for video processing and similar tasks are already widely embedded in our smartphones; neuroprocessors take the idea of machine learning several steps further, offering a more effective solution.
Why is Kaspersky interested in this area? First, our products already make active use of neural networks and machine-learning technologies in general. Examples include technologies for processing large amounts of information about the operation of a corporate network, or for monitoring the data that nodes exchange with each other and with the outside world. Machine-learning technologies allow us to detect anomalies and spot unusual activity in this traffic flow, which may be the result of an intrusion or the malicious actions of an insider. Second, Kaspersky is developing its own operating system, KasperskyOS, which guarantees the safe execution of the tasks assigned to devices under its control. Integrating hardware neural networks into KasperskyOS-based devices promises a lot for the future.
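To make the anomaly-detection idea concrete, here is a generic sketch (not a description of Kaspersky's actual technology): it trains scikit-learn's IsolationForest on made-up "normal" traffic features and flags connections that deviate from them.

```python
# A generic anomaly-detection illustration: learn what "normal"
# network traffic looks like, then score new connections against it.
# All features and numbers here are invented for the example.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical per-connection features: [bytes sent, bytes received,
# connection duration in seconds]. Normal traffic clusters together.
normal = rng.normal(loc=[5_000, 50_000, 30],
                    scale=[1_000, 10_000, 10],
                    size=(1_000, 3))
# A couple of suspicious connections: huge uploads, tiny downloads.
suspicious = np.array([[900_000, 1_000, 2], [750_000, 500, 1]])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

print(model.predict(suspicious))  # typically [-1 -1]: flagged as anomalies
print(model.predict(normal[:3]))  # mostly [1 1 1]: normal traffic
```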
The end result of all these advances will be a true artificial intelligence: one that not only performs tasks for us, but also sets (and solves) tasks of its own. This will raise ethical issues, and some will, of course, find it hard to accept that an obedient machine could be more intelligent than its creator. There is still plenty of time for that, though. About five years ago, everyone was sure that driverless cars were in their final stage of preparation and would hit the market any day now. Such systems are also closely tied to machine learning. Yet in 2022, the opportunities and the problems in this field still balance each other out: even a task as narrow as driving, which humans do extremely well, cannot yet be fully entrusted to a robot. Developments in this field, at the level of software and ideas as well as hardware, are therefore of great importance. The combination of all of them will not yet produce the smart robots of science-fiction books and movies, but it will definitely make our lives a little easier and safer.