Artificial Intelligence (AI) is increasingly a part of our everyday lives. Phone apps use AI to translate words from pictures or speech. Marketers use AI to predict what purchasers may be interested in. Automobiles use AI to identify dangers while driving. Web applications use AI to suggest words and content, and now even to generate art from prompts. More and more, this ubiquitous technology is changing our lives.
Changing lives - for the better?
Sometimes, changes using AI aren't for the better. AI can fail for many technical and practical reasons. Often, those failures are odd but ultimately harmless, such as a home assistant ordering toys after a TV plays a segment on dollhouses, or an automated ball-tracking camera following a referee's bald head instead of the ball on the field.
And sometimes, those failures are very high impact. A self-driving car making an unexpected decision that results in an accident. A black-box model denying loan applications. A production-line assistant harming a worker or misjudging a product's quality or type. A military drone identifying the wrong target. And so on.
Organizations face a difficult choice. Refusing AI adoption means retaining more expensive legacy practices and accepting lower-quality user experiences and employee interactions. Adopting and deploying AI carelessly, however, can produce system-wide issues where even small failures cascade into expensive, high-impact mistakes. Failed AI scales just as well as successful AI.
The need for AI Governance
Fortunately, there is a solution that enables AI to achieve good outcomes: governance. AI governance enables businesses, government agencies, universities, and other institutions to adopt AI with confidence. When properly deployed, AI governance ensures that AI:
Trustworthy - is free from bias, behaves ethically, and can explain why it made its decisions
Safe - is secure, operates according to safety requirements, and follows appropriate escalation protocols
Responsible - enforces compliance requirements and ensures that accountability is designated and understood
The marketplace for AI governance is new. A good governance solution will follow best practices in technology: it will be open source so adopters can inspect it, support version control and data security within an organization, use people-in-the-loop system design that augments people's performance in their duties, and remain extensible as standards evolve over time.
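To make "people-in-the-loop system design" concrete, here is a minimal sketch in Python of how a governance layer might escalate low-confidence model decisions to a person and keep an audit trail. The model interface (a predict() method returning a label and a confidence score), the reviewer callable, and the confidence threshold are all assumptions for illustration, not a definitive implementation:

```python
import json
import logging
from dataclasses import dataclass
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("audit")

# Assumed policy value; in practice this comes from the escalation protocol.
CONFIDENCE_THRESHOLD = 0.90

@dataclass
class Decision:
    label: str
    confidence: float
    reviewed_by_human: bool

def governed_decision(model, features, reviewer) -> Decision:
    """Return a model decision, escalating to a human when confidence is low.

    `model` is assumed to expose predict(features) -> (label, confidence);
    `reviewer` is any callable that lets a person confirm or override the label.
    """
    label, confidence = model.predict(features)
    reviewed = False
    if confidence < CONFIDENCE_THRESHOLD:
        # Escalation: a person confirms or overrides the model's decision.
        label = reviewer(features, label, confidence)
        reviewed = True
    # Audit trail: record what was decided, when, and with what confidence,
    # so accountability is designated and decisions can be inspected later.
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "label": label,
        "confidence": confidence,
        "reviewed_by_human": reviewed,
    }))
    return Decision(label, confidence, reviewed)
```

In a real deployment, the reviewer interface, the threshold, and where the audit log is stored would be governed by your organization's escalation protocols and data-security policies.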
What Flamelit is doing
Flamelit is working toward a world where AI governance delivers the benefits of AI adoption without sacrificing safety, responsibility, or trustworthiness. As consultants working daily in the data science and AI space, we are deeply invested in the successful, worry-free adoption of AI for your organization. We are lending our expertise to the National Institute of Standards and Technology's development of the NIST AI Risk Management Framework and Playbook, and we encourage the adoption of AI governance tooling and practices in every engagement.