The White House released a much-anticipated document entitled “Preparing for the Future of Artificial Intelligence.” Issued by the Executive Office of the President and the National Science and Technology Council (NSTC) Committee on Technology, the report runs 58 pages of research, documentation, and recommendations on how the United States government plans to respond to artificial intelligence (AI) moving forward.
The report was developed by the NSTC’s Subcommittee on Machine Learning and Artificial Intelligence, “which was chartered in May 2016 to foster interagency coordination, to provide technical and policy advice on topics related to AI, and to monitor the development of AI technologies across industry, the research community, and the Federal Government,” according to the report.
The NSTC hosted five public workshops and issued a public Request for Information. The information drawn from those six sources informed the committee’s eventual recommendations. As the report notes, attempts “to reach General AI by expanding Narrow AI solutions have made little headway over many decades of research.”
The 23 official recommendations can be boiled down into seven broad mandates, which serve as a good guide for anyone in the field. These seven mandates will have a noticeable impact on the future of technology in the U.S., and everyone in the industry should be familiar with them in order to take best advantage of the new opportunities they will open (and the doors they may close).
1. AI should be used for public good
AI has already begun providing major dividends to the public in fields such as healthcare, transportation, criminal justice, and the economy.
One concrete example is AI-enabled traffic management, which can reduce wait times and unnecessary carbon emissions by as much as 25 percent. In animal welfare and research circles, animal migration tracking is being improved by analyzing photographs that tourists post to social media. In the future, we hope to see vast improvement in the criminal justice system, including in the areas of crime reporting, bail decisions, and sentencing.
So what are the concrete steps we need to take, moving forward? The government recommends that both private and public institutions invest in research to see how their specific business or industry would benefit from AI. There are also plans to create an open-source AI training database to ensure everyone has access to the technology necessary to embark on this new phase.
2. Government should embrace AI
AI generally makes things faster and more efficient, and every agency should be on board. DARPA has already built a digital tutor used to train Navy recruits, and the recommendation is to adapt that kind of tutor for every agency.
In tandem with this proposal, the government has announced more federal support for AI research. The private sector will be the main engine, but government needs to support both underfunded basic research and the kinds of long-term research in which the private sector is notoriously uninterested.
3. Automated cars and unmanned aircraft need regulation
New regulation is needed for two reasons: to protect the public and to ensure fairness in economic competition.
The cases of automated vehicles (such as self-driving cars) and unmanned aircraft (drones) are prime examples of areas that require immediate regulation. The safety standards that exist for conventional automobiles need to be updated to cover their automated cousins, and the wording of regional and federal laws needs to change to allow for new permutations. The U.S. government should also invest in developing and implementing an advanced, automated air traffic management system.
Creating appropriate regulations means finding senior people in the industry to shape and create those new laws. The government will work to develop a federal workforce with diverse perspectives in order to ensure fairness.
4. No child left behind
Most people have already heard Obama’s speech about empowering the next generation. This recommendation states that all American students, from kindergarten through high school, will — as the report says — “learn computer science and be equipped with the computational thinking skills they need in a technology-driven world.”
America needs to build and sustain a researcher workforce, including computer scientists, statisticians, database and software programmers, curators, librarians, and archivists with specialization in data science.
It isn’t only about teaching AI, however; it’s also about teaching safe AI. To that end, schools and universities will need to include technology-focused ethics and related topics in security, privacy, and safety as an integral part of curricula on AI, machine learning, computer science, and data science.
5. Use AI to supplement, not supplant, human workers
“A 2015 study of robots in 17 countries found that they added an estimated 0.4 percent to those countries’ annual GDP growth between 1993 and 2007,” according to the report. However, there is also the threat that AI will replace parts of the workforce. Generally speaking, automation threatens lower-wage jobs and could widen the wage gap. While the report does not yet offer a fix for this problem, its authors firmly declare that a solution must be found; the recommendation is to study the problem in earnest and search for that solution.
That said, there is ample evidence that AI works to best effect in tandem with human workers, rather than as a replacement for them. In one recent study on detecting cancer in lymph node cells, “an AI-based approach had a 7.5 percent error rate, where a human pathologist had a 3.5 percent error rate; a combined approach, using both AI and human input, lowered the error rate to 0.5 percent,” according to the report. It seems we are stronger together.
6. Eliminate bias from data, or don’t use it at all
The use of data needs to be paired with justice, fairness, and accountability. AI systems are trained in a closed world but then deployed in an open one, and that shift needs to be anticipated and planned for.
Take, for instance, the criminal justice system, where machine learning can help make huge strides for good. “The biggest concerns with Big Data are the lack of data, and the lack of quality data,” according to the report. If data is incomplete or biased, AI can actually exacerbate problems, rather than fixing them. No one wants a machine deciding if they’re a flight risk if it doesn’t have the information to make an informed decision.
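The mechanism behind that warning can be made concrete with a toy sketch. The numbers and the two-group setup below are entirely hypothetical (not from the report): both groups reoffend at the same true rate, but one group's non-reoffenders are under-recorded, so a naive model that simply learns per-group rates from the data inflates that group's apparent risk.

```python
# Toy illustration (hypothetical numbers) of how incomplete data skews
# a naive rate-based risk model. Both groups have the same true 30%
# reoffense rate, but group B's non-reoffense records are incomplete.
records = (
    [("A", True)] * 30 + [("A", False)] * 70   # group A: complete records
    + [("B", True)] * 30 + [("B", False)] * 35  # group B: half the negatives missing
)

def learned_rate(group):
    """Naive model: predicted risk = observed reoffense rate in the data."""
    outcomes = [reoffended for g, reoffended in records if g == group]
    return sum(outcomes) / len(outcomes)

print(f"group A learned risk: {learned_rate('A'):.2f}")  # 0.30 (matches the truth)
print(f"group B learned risk: {learned_rate('B'):.2f}")  # 0.46 (inflated by missing data)
```

The model's arithmetic is correct; the data is not. That is exactly why the report argues that fixing data quality, not just algorithms, must come before deploying such systems.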
Another area where bias can be a huge problem is job application screening. In the U.K., it is illegal to deny someone a job based solely on a decision made by a computer; the emerging view in the U.S. is that any computer making such calls had better be working from complete, unbiased data.
7. Think safe, think global
One of the most important conclusions in the document is that long-term concerns about super-intelligent general AI should have little impact on current policy.
The recommendation here is to permit trade secrets without permitting wholesale secrecy. The report suggests that if competition drives commercial labs toward increased secrecy, it may become more difficult to monitor progress and ensure ethical standards are being met. To that end, the authors suggest defining milestones and logging whether companies have surpassed them, a way to keep an eye on progress without divulging sensitive information.
The government also outlines a plan to monitor other countries. The idea is to develop a government-wide strategy on international engagement related to AI, along with a list of AI topic areas that need international engagement and monitoring. Japan, Korea, Germany, Poland, the U.K., and Italy are specifically listed as potential partners to this end.
The most important things companies need to be aware of are potential financing buckets for organizations that support ethics in AI and AI training, the creation of public milestones with which companies will no doubt need to engage, and new accountability standards for the creators of AI. Overall, the report has a hopeful tone, and the future seems clear. AI is here to stay, and the United States is embracing it with enthusiasm, tempered only mildly with caution.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.