Google details ethical standards for AI and its use cases


Google AI

Google recently rebranded ‘Google Research’ as ‘Google AI’ to signal how seriously the company takes artificial intelligence. Now Google CEO Sundar Pichai has laid out the company’s new AI manifesto, its answer to growing public unease about AI. The manifesto sets out seven principles to guide Google’s work, guidelines the company believes it can apply to both engineering and business decisions.

The seven principles are: be socially beneficial; avoid creating or reinforcing unfair bias; be built and tested for safety; be accountable to people; incorporate privacy design principles; uphold high standards of scientific excellence; and be made available only for uses that accord with these principles. For that last principle, Google says it will weigh factors such as a technology’s primary purpose and use, whether it is unique, whether it will have a significant impact, and what Google’s involvement will be.

In an accompanying blog post, Google acknowledged that AI is a dynamic and evolving area and said it will approach it with humility, a commitment to internal and external engagement, and a willingness to adapt its approach over time.

The company also listed the AI applications it will not pursue: technologies that cause or are likely to cause overall harm; weapons or other technologies whose principal purpose is to cause or directly facilitate injury to people; technologies that gather or use information for surveillance in violation of internationally accepted norms; and technologies whose purpose contravenes widely accepted principles of international law and human rights. Following the Project Maven controversy, Google said that it is not developing AI for use in weapons, but will continue to work with governments and the military in many other areas, including cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue.