
AI Governance

Updated: Apr 12



Why we need AI governance, and why we need it now. There has been plenty of talk but few practical steps from anyone.


AI is highly topical, and many see it as a new gold mine. The question of its impact is a good one but often badly framed. Yes, it will affect jobs, but not in the way people expect; yes, it will make life easier for some but worse for others.


Estate agents, marketing companies, training providers and even accountancy firms now claim to be "AI powered", as if the label lends their service or product extra credibility. AI is bandied around, especially in marketing circles, much as Wild West salesmen sold snake oil as a cure for all ailments. Snake oil was debunked by science and education; AI is the new Wild West, with data as the gold. But we need rules, because AI has a huge impact on humans.


Reason for rules


Generally, artificial intelligence (AI) is a set of algorithms and programs that try to simulate human intelligence in computer-enhanced machines. Machines can process data faster, spot patterns where we cannot connect the dots, and produce answers to many problems faster than humans can. This form of AI raises many ethical and moral issues. I hope to make clear why AI governance is vital, and why many companies will resist it. By governance we mean a system or framework of accountability, including data protection principles, that demonstrates compliance to users, the company and legal norms.


The reason we need good governance rules is that all data science is subject to errors of commission, built in (sometimes deliberately) by human designers, and to errors of omission or process bias, which stem from how AI systems are built:


  • Gather data: data is collected and ordered by researchers.

  • Train the model: people build a learning algorithm and fit it to the data.

  • Evaluate the model: it is checked against internal business logic and processes.

  • Deploy: the model is put into use by a company or organisation.
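The four steps above can be sketched in a few lines of Python. This is a deliberately toy illustration, not a real pipeline: the study-hours data, the threshold "model" and the function names are all invented for the example.

```python
# Illustrative four-step pipeline: gather, train, evaluate, deploy.
# The "model" is a deliberately simple threshold classifier.

def gather_data():
    # Step 1: researchers collect and order the data.
    # Each record: (hours_of_study, passed_exam) - invented sample data.
    return [(1, 0), (2, 0), (3, 0), (6, 1), (7, 1), (9, 1)]

def train(records):
    # Step 2: "learn" a threshold that separates the two classes -
    # the midpoint between the highest fail and the lowest pass.
    fails = [x for x, y in records if y == 0]
    passes = [x for x, y in records if y == 1]
    return (max(fails) + min(passes)) / 2

def evaluate(threshold, records):
    # Step 3: check the model against internal logic
    # (here, simply its accuracy on the training set).
    correct = sum((x > threshold) == bool(y) for x, y in records)
    return correct / len(records)

def deploy(threshold):
    # Step 4: expose the trained model as a prediction function.
    return lambda hours: int(hours > threshold)

data = gather_data()
model_threshold = train(data)              # 4.5 for this sample
accuracy = evaluate(model_threshold, data) # 1.0 on the training set
predict = deploy(model_threshold)
print(model_threshold, accuracy, predict(5))
```

Notice that every step involves a human choice (which data, which rule, which evaluation), which is exactly where the errors of commission and omission creep in.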

AI can be seen as three types of system, combined in one program or kept as separate programs for particular tasks. We can better understand why governance matters by understanding the nature of these systems.


Robotics


If I build a machine that assembles cars or ships using fixed routines and set processes, it is not using AI; it is simply following set routines. We have had such machines for over a hundred years.


General reasoning AI


If I build a machine that learns about its environment, or analyses data sets by cataloguing objects and ordering them into meaningful groups, that is generalised AI, such as Alexa or a TomTom navigation system. It solves a particular problem, is considered a weak AI system, and it is easy to examine its routines and processes to understand it.


Reasoning AI


If I get the same machine to make reasoned decisions or take actions based on previous outcomes or data, I am adding another layer. That is reasoning AI, or generative AI, as it chooses between actions based on data sets and rules I have created for it. ChatGPT and Microsoft Copilot are prime examples of generative AI. Such a system follows rule-based decision making to produce answers or permutations: it reformulates queries and searches the web for data to present me with options. It may also have machine learning capabilities; that is, it learns your habits, the key words you use and the places you visit online, and suggests similar things to you. Google, Facebook and YouTube work in the same way.
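The "learning your habits" idea can be shown with a toy rule-based recommender. Everything here (the visit log, the categories, the catalogue) is an invented illustration of the pattern, not how any real platform is implemented.

```python
from collections import Counter

# Toy sketch of rule-based "learning your habits": count the categories
# of pages a user visits, then suggest more items from the most
# frequently visited category. All data below is invented.

visits = ["travel", "news", "travel", "sport", "travel"]

catalogue = {
    "travel": ["city breaks", "beach holidays"],
    "news": ["politics digest"],
    "sport": ["football scores"],
}

def suggest(visit_log):
    # Rule: recommend items from the category visited most often.
    top_category, _ = Counter(visit_log).most_common(1)[0]
    return catalogue[top_category]

print(suggest(visits))  # ['city breaks', 'beach holidays']
```

The rule is trivially simple, yet it already creates a feedback loop: whatever you visited most is what you are shown more of, which is precisely the "bubble" effect discussed later.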


Problem solving AI


If my machine not only learns about its general environment and data types, but I add reasoning AI so that it can solve problems, I have something stronger. For example, through its sensors it learns that it is in a forest, recognises trees, and senses that there is a river five feet wide in front of it. The machine can solve the problem of crossing the river by using its reasoning and data-gathering capacity to work out that a few chopped trees can be used to build a bridge. Problem solved. This is strong AI.


Machines can also recognise and anticipate human emotions, as researchers in Japan are showing with systems that recognise emotions in humans and mimic them. Such a machine becomes an autonomous AI machine that can learn and think for itself. Like humans, these machines not only have prior analytics (a priori), allowing them to use deductive logic derived from definitions and first principles; they also have posterior analytics (a posteriori), derived from observational evidence. I use the terms in the Kantian sense. This gives my AI machine a huge advantage, as it can sift through huge amounts of information and produce solutions, as in the film 'I, Robot', based on Asimov's stories.


The meta problem


Now it gets interesting for AI governance. Consider the steps by which an AI model is typically created, summarised below.



These steps are:

  • Step 1, Collect data. Bad data in gives bad results out: data issues.

  • Step 2, Data preparation. Man-made patterns are what the model picks out: algorithm bias?

  • Step 3, Machine learning. The system connects the dots we have programmed it to connect; if we do not program it to see a dot, it will not see it!

  • Step 4, Validation. We validate the system through internal processes only.

  • Step 5, Deployment. The aim is to obtain results that are better than before, that add value and can be monetised.

  • Step 6, Improvements. The model is improved in the light of outcomes.
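The "bad in gives bad out" problem at steps 1 to 3 can be made concrete with a toy hiring model. The records, cities and rule below are all invented for illustration; the point is only that a model trained on a skewed sample faithfully reproduces the skew, and internal validation (step 4) never notices.

```python
# "Bad in gives bad out": a trivial model trained on a skewed sample
# reproduces the skew. All records below are invented.

# Historical "hires" - the training set happens to contain only
# candidates from city A: an error of omission at step 1 (collect data).
past_hires = [
    {"city": "A", "score": 70},
    {"city": "A", "score": 65},
    {"city": "A", "score": 80},
]

def learn_rule(hires):
    # Step 3 (machine learning): the model "connects the dots" it was
    # given and concludes that being from city A predicts success.
    cities = {h["city"] for h in hires}
    return lambda candidate: candidate["city"] in cities

is_promising = learn_rule(past_hires)

# Step 4 (validation) against the training data alone looks perfect...
print(all(is_promising(h) for h in past_hires))   # True

# ...but at step 5 (deployment) an equally strong candidate from
# city B is rejected purely because of the biased sample.
print(is_promising({"city": "B", "score": 90}))   # False
```

This is, in miniature, exactly what happened with the Amazon recruitment tool discussed further down.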

A technical issue


At the heart of many AI systems is a zero-sum game approach, a term from game theory and economics in which one participant's gain or loss is exactly balanced by the losses or gains of the others. The theory has limited applications, however. In fact, Von Neumann and Morgenstern were the first to construct a cooperative theory of n-person games. In such models, cooperative or altruistic behaviour leads to progress, as has happened in human society through cooperation and coexistence.
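A zero-sum game can be shown in a few lines. The payoff matrix below is an invented example: each entry is what the column player pays the row player, so the two players' outcomes always sum to zero, and each player's "safe" strategy is computed by maximin/minimax.

```python
# A zero-sum game in miniature: the row player's gain is exactly the
# column player's loss. The payoff matrix is an invented example.

payoffs = [  # payoffs[i][j] = amount the column player pays the row player
    [3, -1],
    [0,  2],
]

# Row player's maximin: pick the row whose worst case is best.
maximin = max(min(row) for row in payoffs)

# Column player's minimax: pick the column whose best case for the
# opponent is smallest.
cols = list(zip(*payoffs))
minimax = min(max(col) for col in cols)

print(maximin, minimax)  # 0 2 - no saddle point in pure strategies

# Zero-sum check: every outcome nets to zero across the two players.
assert all(x + (-x) == 0 for row in payoffs for x in row)
```

Whatever one side wins, the other loses; there is no move that makes both players better off, which is why a purely zero-sum framing leaves no room for the cooperative behaviour the paragraph above describes.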


Why AI governance is important



I hope you can already see that viewing AI as a 'wonder kid' is misguided. AI is not an objective paradigm or process: it raises issues of trust, transparency and responsibility to wider stakeholders, and it is open to abuse. Despite their bland social responsibility statements, firms are focused on making profits and maximising benefits to their shareholders. If they can get free resources such as data and use them for profit, companies usually will, with no thought for others. This is why we need AI governance.


Do not forget, either, that many companies are buying pre-packaged platforms they have not built themselves, so they are not even aware of the trust issues they face.


For AI governance this highlights a problem of conflicts of interest and data privacy, and other questions arise. Can I really trust the data that goes in or comes out? If not, I need the right to view that information, as it could infringe my data privacy and ownership rights. Are the algorithms biased in any way, or designed to be inclusive? Who checks them? Will large companies, especially in medical healthcare, allow such transparency when it is critical to their survival? Is there a third-party body I can turn to to enforce my rights?


This is where independent governance plays a vital part. Are the people in that role truly independent of the company they are acting for, given that the organisation or company may be paying for the role? Or will they be like the MBA-heavy boards of directors who look after their own interests first, then the company's, and that is it?


Also, who owns the data? It is often assumed that data comes freely, at no economic cost and with no ownership attached. This raises issues of rights and payments.


Let me give you one or two examples of data bias built into systems. Ask the question: 'When did World War Two start?'


European history books state 1 September 1939, and online sources such as a Google search point to the invasion of Poland. Yet Japan had technically started World War Two on 7 July 1937. Most Americans think the war with Japan started in December 1941. In fact, Japan invaded Northeast China in 1931, and between 1937 and 1945 China and Japan were at total war. The European war was a continuation of the wars started in Asia.


This is a simple historical fact that is often stated incorrectly, even in history books. A search by an AI machine trained on Eurocentric sources would produce the same biased answer; an AI robot would make the same mistake.

In other words, so-called 'facts', lies and self-deceits are built into the social and economic fabric that humans rely on for high-end decisions and judgements. Our bias permeates all our systems, so how can we solve the problem of bias?


The following information is from an internet search I did recently.


  • Fail: IBM's "Watson for Oncology" was cancelled after $62 million, unsafe treatment recommendations and misdiagnosis errors.

  • Fail: Microsoft's AI chatbot Tay was corrupted by Twitter trolls, who flooded the bot with a deluge of racist, misogynistic and antisemitic tweets. It learned to be a Nazi!

  • Fail: Apple's Face ID was defeated by a 3D mask.

  • Fail: Amazon axed its AI recruitment tool because its engineers had trained it to be misogynistic: Amazon trained the AI on engineering job applicants' résumés and benchmarked that training set against its current, mostly male, engineering employees.

  • Fail: Amazon's facial recognition software matched 28 U.S. Congresspeople with criminal mugshots; nearly 40 per cent of the false matches were people of colour, reflecting the bias of the mugshot database used.

You see, AI is not all it is made out to be, especially in sales, marketing or politics, where people seem to live in bubbles created by IT people who inhabit bubbles and echo chambers of their own. These companies can afford to lose millions on a mistake; SMEs cannot.


In conclusion


We need to be aware of, and always on the lookout for, our biases, be they gender, cultural, social or personal. Otherwise we will build large AI models that do not solve global problems but networks that reflect the same divisions and inequalities we have in society now. Despite all its promise, AI will then lose its real benefits, and we will go looking for the next new thing. We need systems with the following attributes:

  • Transparency by design, not as an afterthought.

  • Data usage rules made explicitly clear by the builders of the systems.

  • Any assumptions or biases made clear, as some systems will need more governance than others.

  • No infringement of personal data, data sharing or profit-making without the owner's express consent and agreement.

  • Independent and robust governance oversight, with the ability to sanction.

  • Governance misbehaviour or misdeeds must carry legal enforcement.

This will reinforce trust and ensure that AI is used for good. We need AI governance to stop the unscrupulous.


By Zulf Choudhary


10/04/2014
