Blog

Key Takeaways: EO on Safe, Secure, and Trustworthy AI

Written by Waits Sharpe | Oct 31, 2023 6:20:46 PM

Artificial intelligence (AI) has captured the entire world's attention over the past year as innovative learning models and generative AI have become a staple in people's everyday lives. Beginning with ChatGPT, the AI craze has proven itself to be more than a phase, and many companies are implementing machine learning and artificial intelligence in their processes and workflows.

In January of 2023, Microsoft made a multibillion-dollar investment in OpenAI to integrate ChatGPT into its own tools and software. This includes bringing ChatGPT-powered features to the Bing search engine to enrich search results for users. Microsoft isn't the only company investing in AI. Google has also begun developing its own generative AI model, called "Bard". In addition to conversational chatbots, many companies are using AI to aid in graphic design. For many, the future of AI is encouraging. Others are more hesitant, as innovation may lead to job displacement or the spread of misinformation.

Regardless of whether one is concerned or intrigued by the future of AI, one thing is for certain: Artificial intelligence must be designed and implemented with security in mind. As more organizations begin to implement AI, the potential cyber risks associated with it skyrocket. In response to this growing concern, the Biden Administration has issued an executive order addressing how America is to manage the risks associated with artificial intelligence. Here are several takeaways from Biden's executive order on "safe, secure, and trustworthy artificial intelligence".

Creating AI Security Standards

Artificial intelligence currently lacks standardized controls and implementation guidelines for individual safety and security. The Biden Administration's executive order seeks to develop "standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy". The National Institute of Standards and Technology (NIST) will be responsible for setting the guidelines for how AI is tested and deemed secure prior to release. Developers will also be required to share their safety test results with the U.S. government in accordance with the Defense Production Act.

Any company developing a model that may impact the nation's security must submit its testing results to demonstrate that the model is safe. Another crucial aspect of creating AI security standards is protecting Americans from misinformation and fraud by developing tools to detect AI-generated content. The Department of Commerce will develop guidance for clearly labeling AI-generated content so that individuals can tell the difference between official material and misinformation.

Protecting Privacy

Privacy for every individual and company is an integral part of cybersecurity. Protecting personally identifiable information (PII) and medical information is required by law. Without guardrails, artificial intelligence risks breaching that privacy, because learning models use personal data to train and develop their responses. This executive order seeks to prioritize "privacy-preserving techniques" when developing new AI models.
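The order doesn't name specific techniques, but differential privacy is one commonly cited example of a privacy-preserving approach: rather than releasing exact statistics computed from personal data, a system adds calibrated random noise so that no single person's record can be inferred from the output. The short Python sketch below is purely illustrative; the patient-count scenario and the sensitivity and epsilon values are our own assumptions, not anything specified in the executive order.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Return a differentially private version of an aggregate statistic.

    Adds Laplace noise scaled to sensitivity/epsilon so the released
    number reveals very little about any single individual's record.
    """
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_value + noise

# Illustrative example: release a count of patients with a condition
# without exposing whether any one patient is in the dataset.
true_count = 128  # computed on the private data (hypothetical value)
private_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
print(round(private_count))
```

In this kind of scheme, a smaller epsilon adds more noise and therefore stronger privacy, at the cost of less accurate published results.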

Responsible Government Use of AI

According to the executive order, AI can "help government deliver better results for the American people. It can expand agencies' capacity to regulate, govern, and disburse benefits". While AI may benefit government agencies, it also may pose risks to individuals.

As part of Biden's executive order, the administration calls for clear standards and guidelines for protecting individuals' rights and safety when agencies use AI, as well as for improving how AI is procured and implemented. The government also seeks to hire more AI professionals and to provide AI training for employees in various fields. The President also seeks to help agencies "acquire specified AI products and services faster, more cheaply, and more effectively through more rapid and efficient contracting".

Artificial intelligence isn't going anywhere. As AI develops, the controls and security processes we use must evolve to mitigate new risks. This executive order is another step in securing AI and ensuring privacy and security for all citizens.

CorpInfoTech (Corporate Information Technologies) provides small to mid-market organizations with expert I.T. services, including security assessment, cybersecurity penetration tests, managed services (MSP), firewall management, and vulnerability management. CorpInfoTech can help organizations quantify, create, refine, and mitigate the risks presented by business-threatening disasters in whatever form they may be disguised.