Artificial Intelligence: Google’s Teachable Machine website is an initiative by the company to teach users the concepts of artificial intelligence in a fun, straightforward way.
The platform lets you configure movements, recognized through the camera, that trigger predetermined responses.
In this way, it is possible to understand how machine learning — the principle behind neural networks — works: the computer is trained to recognize specific behavior patterns (movement in the video) and to perform a particular task in response. Note that a webcam is required to do the activities.
How Does It Work?
Using your computer’s camera, you can tie a specific hand gesture, or even your face, to a particular GIF, sound, or voice message. The idea is to show visitors the precision and ease of the machine learning behind the site, which can quickly identify the gestures you make in front of the camera.
Although the site does not have a Portuguese version, using it is simple: first, activate your computer’s camera. Then, in the “Learning” block, choose one of the three actions and hold the respective button while gesturing at the camera. For the machine to learn that a wave should trigger a GIF of a cat saying goodbye, you need to “record” that action with your camera while holding the related button.
You can then make another gesture and link it to another GIF or even a sound. By switching between the configured moves, you can test the system’s ability to recognize precisely what you are doing: make the gesture, and the site responds with a GIF, sound, or message.
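The train-by-example loop described above can be sketched with a toy nearest-neighbor classifier. This is an illustrative sketch, not Teachable Machine’s actual code: the feature vectors stand in for features extracted from webcam frames, and all the names below are assumptions for the example.

```python
# Toy nearest-neighbor gesture classifier, sketching the
# hold-button-to-record training loop described in the text.
# Feature vectors stand in for features extracted from webcam frames.

def _distance(a, b):
    # Euclidean distance between two feature vectors.
    return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

class GestureClassifier:
    def __init__(self):
        self.examples = []  # (feature_vector, label) pairs

    def record(self, features, label):
        # Called once per frame while the user holds a training button.
        self.examples.append((list(features), label))

    def predict(self, features, k=3):
        # Majority label among the k closest recorded examples.
        nearest = sorted(self.examples, key=lambda e: _distance(e[0], features))[:k]
        labels = [label for _, label in nearest]
        return max(set(labels), key=labels.count)

clf = GestureClassifier()
for _ in range(5):                       # frames captured while each button is held
    clf.record([1.0, 0.1], "wave")       # gesture 1: waving -> cat GIF
    clf.record([0.1, 1.0], "thumbs_up")  # gesture 2: linked to a sound
print(clf.predict([0.9, 0.2]))           # a new frame close to the "wave" examples
```

The more frames you record for a gesture, the more neighbors it contributes, which is why holding the button longer makes recognition more reliable.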
Chips Designed by AI
The chip floor plans created by AI match, and more often than not exceed, those made by humans on benchmarks such as power consumption, performance, and computational efficiency.
These results could help scientists delay the end of the famous Moore’s Law, which predicts that the number of transistors on a microchip doubles roughly every two years. According to experts, the miniaturization of electronic components should reach its physical limits in a few years.
The task given to Google’s algorithms is known as “floorplanning” and is restricted to creating optimal layouts on a silicon die for a chip’s subsystems — CPUs, GPUs, and memory cores, all interconnected.
Where each component is placed on the die makes all the difference to the speed and efficiency of the chip. Even nanometer-scale changes in position have considerable effects and require highly specialized work during the design process.
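One standard way placement tools estimate how good a layout is — used here as an illustrative stand-in for the quality measures the article mentions, not as Google’s specific method — is half-perimeter wirelength (HPWL): the half-perimeter of the bounding box around all pins a wire must connect.

```python
def half_perimeter_wirelength(pins):
    """HPWL estimate for one net: half the perimeter of the bounding
    box enclosing all connected pins. `pins` is a list of (x, y)
    coordinates on the die, in arbitrary grid units."""
    xs = [x for x, _ in pins]
    ys = [y for _, y in pins]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

# Moving a memory block closer to the CPU shortens the estimated wiring:
far = half_perimeter_wirelength([(0, 0), (9, 8)])   # memory placed far from CPU
near = half_perimeter_wirelength([(0, 0), (2, 1)])  # memory placed nearby
print(far, near)  # 17 3
```

Shorter wires mean less delay and less energy spent driving signals, which is why placement has such an outsized effect on chip speed and efficiency.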
Google engineers trained the machine learning algorithm on a dataset of 10,000 chip floor plans of varying quality, some generated randomly and others created by human experts according to specific manufacturing standards.
Using a “reward” function that scored each design on metrics such as wire length and energy usage, the AI learned to distinguish good floor plans from bad ones. With this feedback, Google’s artificial intelligence could generate its own floor plans surprisingly quickly.
Google engineers have accomplished a remarkable feat: they trained artificial intelligence (AI) to design chips faster and more efficiently than humans. A job that takes flesh-and-blood engineers six months can be done by the machine learning algorithm in just six hours.
The search giant intends to use the research results to create the next generation of its Tensor Processing Unit (TPU) chips, which are optimized to handle the massive workloads of complex artificial intelligence processing tasks.
“In other words, AI is helping to accelerate the future of artificial intelligence development. This should allow companies to more quickly explore potential architecture space for future projects and more easily customize chips for specific workloads,” say Google researchers.