Google Brain has reportedly been developing artificial intelligence (AI) software that can build other AIs. In May this year, Google Brain researchers unveiled AutoML, a machine learning system capable of generating its own AIs, reducing the need to hire human machine-learning experts.
Recently, the Google Brain team challenged AutoML to create a “child” AI that could outperform its human-made counterparts, using an approach called reinforcement learning. AutoML acts as a controller neural network that generates a “child” network to carry out a specific task.
Called NASNet, the child AI was tasked with recognizing objects in a real-time video feed, such as people, cars, traffic lights, handbags, and backpacks. The child model trains on the task and is evaluated by AutoML’s controller neural net, which learns from that feedback and refines the child model, repeating the process until it produces a superior version of NASNet.
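The controller-child loop described above can be sketched as a small reinforcement-learning search. The sketch below is a toy illustration, not Google's actual implementation: the two-decision search space, the fixed scoring table standing in for "train and evaluate the child," and the REINFORCE hyperparameters are all assumptions made so the example runs instantly in plain Python.

```python
import math
import random

random.seed(0)

# Toy search space: one option per architectural decision.
# (Hypothetical; the real NASNet space covers whole convolutional cells.)
SEARCH_SPACE = {
    "filters": [16, 32, 64],
    "kernel": [3, 5, 7],
}

def evaluate_child(arch):
    """Simulated 'train and evaluate the child network' step.

    A fixed scoring table stands in for real validation accuracy,
    so the example needs no ML framework.
    """
    score = {16: 0.2, 32: 0.5, 64: 0.9}[arch["filters"]]
    score += {3: 0.4, 5: 0.8, 7: 0.3}[arch["kernel"]]
    return score / 2.0  # pretend accuracy in [0, 1]

class Controller:
    """Samples child architectures; updated with REINFORCE."""

    def __init__(self, space, lr=0.5):
        self.space = space
        self.lr = lr
        self.logits = {k: [0.0] * len(v) for k, v in space.items()}

    @staticmethod
    def _softmax(xs):
        m = max(xs)
        exps = [math.exp(x - m) for x in xs]
        total = sum(exps)
        return [e / total for e in exps]

    def sample(self):
        arch, picks = {}, {}
        for key, options in self.space.items():
            probs = self._softmax(self.logits[key])
            i = random.choices(range(len(options)), weights=probs)[0]
            arch[key], picks[key] = options[i], i
        return arch, picks

    def update(self, picks, advantage):
        # Policy-gradient step: push up the logits of the chosen
        # options when the child scored above the running baseline.
        for key, i in picks.items():
            probs = self._softmax(self.logits[key])
            for j in range(len(probs)):
                grad = (1.0 if j == i else 0.0) - probs[j]
                self.logits[key][j] += self.lr * advantage * grad

controller = Controller(SEARCH_SPACE)
baseline = 0.0
for _ in range(500):
    arch, picks = controller.sample()       # controller proposes a child
    reward = evaluate_child(arch)           # child is "trained" and scored
    controller.update(picks, reward - baseline)
    baseline = 0.9 * baseline + 0.1 * reward  # moving-average baseline

# Read off the controller's preferred architecture.
best_arch = {
    k: opts[controller.logits[k].index(max(controller.logits[k]))]
    for k, opts in SEARCH_SPACE.items()
}
print(best_arch)
```

After a few hundred iterations the controller's preferences concentrate on the highest-scoring options, mirroring in miniature how feedback from each evaluated child steers the search toward better architectures.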
After repeated rounds of tweaking and refinement, NASNet was tested on the ImageNet image classification and COCO object detection datasets, which Google describes as “two of the most respected large-scale academic data sets in computer vision.” According to Google, NASNet outperformed all other computer vision systems, Futurism reports.
While AutoML and NASNet have many possible uses, they also raise ethical questions. For instance, what if AutoML creates AI systems at such a pace that society simply cannot keep up with them, or what if the parent AI passes unwanted biases down to its children?
To keep such systems under human control, it is important to implement stricter regulations and stronger ethical standards that prevent AI from being put to malicious use.