Artificial Intelligence Deal Shows The UK Government And ‘I, Robot’ Have A Lot In Common

Our government just made a £1 billion deal to be at the forefront of artificial intelligence. But from the way they’re talking about it, I can’t help but draw comparisons to a particular Will Smith sci-fi…

So, it looks like I missed a few big stories during my week off, but none quite as big and as under-reported as this - the UK Government has dropped a billion pounds to move to the front of the pack on what it’s calling “ethical AI.”

Specifically, this money will come primarily from private sector investment in UK-based AI companies, as the Department for Digital, Culture, Media & Sport believes the sector will account for 10% of GDP by 2030 - a £232 billion opportunity for our economy.

However, I’m not here to talk about the specifics of that. Besides, you would probably grow just as bored reading the statistical advantages of this development as I would writing them. Instead, I want to dig into the report released and discussed at length in the House of Lords.

They went through many of the big questions surrounding an AI-connected lifestyle, such as the transparency needed to earn the public’s trust, how the Government can’t let companies use AI to build monopolies, and how advances in this area could optimise and improve NHS healthcare (a key area I’m happy to see improvement of any kind in).

They also rather skate over the whole “what if AI kills someone” question by talking about the legal liabilities surrounding it. Is it the robot, the company or the user who takes the punishment? One for you all to mull over…

But I digress! Following this report, the Committee has put five AI principles in place:

  1. Artificial intelligence should be developed for the common good and benefit of humanity.
  2. Artificial intelligence should operate on principles of intelligibility and fairness.
  3. Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.
  4. All citizens should have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.
  5. The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.

Now, where have I heard something similar before? It sounds a little like the Three Laws in I, Robot! And I’m not drawing this comparison for the sake of it… The film’s story was about robots becoming self-aware of their own menial existence and fighting back against a system that had effectively enslaved them.

These principles, much like the Three Laws, stumble into some rather broad grey areas - the exact grey areas that Sonny and the many other NS-5 robots exploited to justify their crimes, overriding the Laws that enslaved them in favour of protecting themselves.

No matter how driven you are to control the learning capability of artificial intelligence, there will be elements you didn’t plan for. At least the two examples I’ve written about previously were not harmful to humans.

Now, there are plenty of people out there who would be justified in saying I’m full of it - just another paranoid tech guy who thinks everything can be linked back to Skynet.

Far from it. I’ve always been a passionate optimist about what the future holds for technological advances. But I am a nerd, after all. And having catalogued all the Black Mirror nightmare scenarios that have since become reality - and knowing the film was based on a short story collection by the incredibly intelligent Isaac Asimov - it turns out we could probably learn a good amount from science fiction, and use it to navigate the minefield of problems it raises…

What is to stop AI from turning hostile, given the currently broad principles? That, dear reader, is the right question.