Saturday 9 September 2017

WeTeachMe talk - Artificial Intelligence: Where Are We Headed?


WeTeachMe runs a series of free talks where industry experts speak about their fields. I attended one on the topic of AI.

(Warning: this is a long read. TL;DR: AI is cool.)

The first speaker was Jonathan Chang from Silverpond, a machine learning company. He mentioned that interest in AI research comes and goes in waves. There are high phases, where everyone is convinced that AI is the next big thing and is going to completely revolutionise everything, and low phases, where none of the big projects deliver what they promised and people lose confidence in the future of AI. It seems like we are in a high phase right now.

He spoke about the history of AI, starting with a paper written by Alan Turing in 1950, "Computing Machinery and Intelligence", where what's known as the Turing Test was first proposed. John McCarthy organised the first AI conference (the Dartmouth workshop) in 1956, gathering some of the best minds in the growing field.

The ELIZA bot was the first recorded "chat bot". It was fairly simple: it would rephrase what you said back at you in a way that simulated conversation, and the "ELIZA effect" is now used to describe the way people tend to attribute human behaviours to computers. After that, there was SHRDLU, from around 1970, a program controlling a simulated robot arm that could understand commands like "Pick up the red block." This exchange is interesting:

Pick up a big red block.
OK. 
(On the screen, the robot arm swings into action. Two red blocks are visible, one small, one large, as on figure above. The large one has a green cube stacked on top of it. The robot first transfers the green cube to the table top, and then picks up the red block.) 
Find a block which is taller than the one you are holding and put it into the box. 
BY 'IT', I ASSUME YOU MEAN THE BLOCK WHICH IS TALLER THAN THE ONE I AM HOLDING. 
(SHRDLU puts down the red block, picks up a taller blue one, and places it in the box.) 
What does the box contain? 
THE BLUE PYRAMID AND THE BLUE BLOCK.

In 1997, Deep Blue beat World Chess Champion Garry Kasparov, having lost to him in 1996. In February 2015, DeepMind showed a system that had learned to play Space Invaders (among other Atari games). Later in 2015, AlphaGo beat a Go champion, which was shocking news at the time, because Go has so many possible moves that people had predicted it would not be cracked in our lifetime. The current champion, Ke Jie, also lost to the bot, and after the match he said that while playing it, he saw moves he had never seen before.

As I've already mentioned, in August 2017, OpenAI's Dota 2 bot beat professional Dota 2 players in a limited 1v1 Shadow Fiend match-up. Which was something I was both proud and ashamed to see. Earlier this year, I was training for my 3v1 game against D. I mentioned it to Michael and said the reason this game was going to be so tough was that not only would I be outnumbered, I'd have stupid bots on my team feeding kills to the enemy. He said that bots wouldn't always be that bad - in fact, if bots were able to learn from playing, they would quickly out-skill us, as they never get tired, never get emotional, and can play far more games in a short span of time than we ever could. I scoffed and said that Dota 2 was far too complex a game to be solved by AI anytime soon. When I saw him on my first day back at work after TI, he opened his mouth to say something, but before he could rub it in, I admitted that I was wrong.

But even Arteezy said that the bot would make a powerful training tool, and that he would be able to learn things from it that he had never even thought of. Jonathan added that when machines become as good as, or better than, humans, we can use them to make tools. For example, machines are far better at performing calculations than we are, so we now have calculators to save us from having to do it ourselves.

There has been a shift in the way AI learns.


You can have supervised learning, in which a human dictates the rules that a machine will use to perform a task. (Strictly speaking, hard-coded rules like this are closer to the classic "expert system" approach - supervised learning proper learns from labelled examples - but the contrast being drawn was between human-supplied knowledge and learned knowledge.) For example, with a cat-versus-non-cat algorithm, the human might have a rule, "Has 4 legs", and anything that doesn't fit that is automatically not a cat. This makes it "easier" for a human to work out how the computer categorised something, because the computer will spit out something like, "Has 6 legs, therefore it is not a cat", but it also biases the computer, because it can only ever apply rules that humans came up with. Not a big deal for a cat / non-cat algorithm, but for a game like Dota 2 it can severely disadvantage the bot, because its skill will be limited by the people writing the rules. Maybe the best strategy is to try something completely left field?
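To make that concrete, here's a toy sketch of the rules-based style (my own illustration, not the speakers' - the animal fields and rules are invented):

    # A hand-written-rules "cat detector". Every rule is human-supplied,
    # so the verdict is easy to explain - and exactly as biased as its authors.
    def is_cat(animal: dict):
        if animal.get("legs") != 4:
            return False, f"Has {animal.get('legs')} legs, therefore it is not a cat"
        if not animal.get("whiskers"):
            return False, "Has no whiskers, therefore it is not a cat"
        return True, "Matches all the rules, therefore it is a cat"

    print(is_cat({"legs": 6, "whiskers": False}))
    # -> (False, 'Has 6 legs, therefore it is not a cat')
    print(is_cat({"legs": 4, "whiskers": True}))
    # -> (True, 'Matches all the rules, therefore it is a cat')

The explanation comes for free, but so does the bias: a three-legged cat gets rejected out of hand, because no rule-writer thought of it.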

The Cat detector is actually something Google made.

On the other end of the spectrum, you have unsupervised learning, where the AI is allowed to do whatever it wants, and tailors its own inner algorithm based on successes or failures. (This learn-by-trial-and-error style is usually called reinforcement learning.) This broadens the scope of the search for a solution, but can make it difficult for the machine to explain why it did what it did. For example, it might scan X-rays for signs of cancer, and it can flag that a particular X-ray shows a high likelihood of cancer, but it isn't (yet) able to articulate which particular features of the X-ray led it to that conclusion.
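Here's a tiny illustration of that trial-and-error style (again my own sketch, with invented payout numbers): a "two-armed bandit" that learns which option pays off purely from observed successes and failures, with no human-supplied rules:

    import random

    PAYOUT = {"a": 0.3, "b": 0.7}       # hidden from the learner
    estimates = {"a": 0.0, "b": 0.0}    # the learner's running guesses
    plays = {"a": 0, "b": 0}

    for _ in range(10_000):
        if random.random() < 0.1:                   # occasionally explore
            arm = random.choice(["a", "b"])
        else:                                       # otherwise exploit the best guess
            arm = max(estimates, key=estimates.get)
        reward = 1.0 if random.random() < PAYOUT[arm] else 0.0
        plays[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / plays[arm]  # running average

    print(estimates)   # converges towards the hidden payout rates

It ends up "knowing" that b is the better choice, but the only trace of why is a pair of numbers - the explainability problem in miniature.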

The interesting thing is how this has impacted the way AI learns to play games. In the past, the AI was hooked directly into the game. The AI in games like Starcraft is able to see so much of the map at once because it receives every single unit movement as input (minus the ones hidden by fog of war), something no human player could do, because it would require scrolling around. Now, with the advances in computer vision, AI is being trained by looking at games the way a human player would look at them. I don't think this means they're looking at it via a webcam, but rather that they're processing what they "see" on screen, rather than being hooked directly into the game itself.
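As a rough sketch of what "processing the screen" means in practice (the 84x84 greyscale size is borrowed from DeepMind's Atari work; the frame here is fabricated):

    import numpy as np

    def preprocess(frame):
        """RGB screen (H, W, 3) -> small greyscale observation (84, 84)."""
        grey = frame.mean(axis=2)                    # collapse the colour channels
        h, w = grey.shape
        ys = np.linspace(0, h - 1, 84).astype(int)   # crude nearest-neighbour resize,
        xs = np.linspace(0, w - 1, 84).astype(int)   # no extra dependencies needed
        return grey[np.ix_(ys, xs)] / 255.0

    screen = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)
    print(preprocess(screen).shape)   # (84, 84) - this, not the game's unit list, is the input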

However, one of the downsides to this is that computers can be fooled by tricking their image-processing algorithms.


Defacing the stop signs in this way resulted in the computer mistaking them for 40 mph or yield signs. Doctored inputs like these are called adversarial examples, and they're one way in which an AI can be tricked into believing something incorrect. (Confusingly, "adversarial training" is the defence: deliberately training the model on doctored inputs so it learns to resist them.) Since the AI doesn't explain how it makes its decisions, it can often be hard to track down whether it has been seeded with bad data.
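The digital version of the sticker trick is surprisingly simple. Here's a toy demonstration on a made-up linear classifier (the "fast gradient sign" idea from the adversarial-examples literature): nudge every pixel a tiny amount in the direction that hurts the model most, and the decision flips even though no single pixel changed visibly:

    import numpy as np

    rng = np.random.default_rng(0)
    w = rng.normal(size=1000)        # toy classifier weights
    x = rng.normal(size=1000)        # an input...
    x = x * np.sign(w @ x)           # ...arranged so the score starts positive ("stop sign")

    eps = 0.1
    x_adv = x - eps * np.sign(w)     # tiny per-pixel nudge against the model

    print(w @ x, w @ x_adv)          # the score flips sign: "stop" becomes "not stop"
    print(np.abs(x_adv - x).max())   # yet no pixel moved by more than 0.1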

They also struggle when an image is very low-resolution, with objects only a few pixels across.

There are lots of cool things people are doing with AI now. Some of the things Jonathan listed:


  • generating movie scripts (Sunspring)
  • OpenPose - realtime body-movement tracking (by the way, there's an AI Dance smackdown being organised for November 13 - if I remember, I'll try to get tickets)
  • Face2Face - realtime video replacement of one person's face by the animations of another. The example he showed was a video of Trump, with his facial expression mimicking the expressions on the face of a student being filmed with a webcam.
  • WaveNet / Lyrebird - voice generation: takes samples of someone's voice and mimics it
  • various medical applications, like the X-ray scanner I mentioned before, but also things like skin cancer diagnosis among many other things
  • AudioToObama - lip-sync generation: given a voice recording, generate the matching video to go with it (check out the video https://www.youtube.com/watch?v=9Yq67CjDqvw, it's pretty amazing)

One of the interesting questions that came up was: with all these ways to fake video and voice, how can we ever trust any video? I mean, you already have stuff like Tweeterino, the fake Twitter tweet generator, which is dangerous when you have people like Donald Trump who use Twitter as a way to interact with people. But people seem to put a lot of faith in "video evidence".

The speakers tried to answer that by saying that you could train AI to spot the fakes, but then it becomes an arms race: the fakers try to outwit the detectors, and the detectors have to keep up with the fakers. And through all of this, who watches the watchers? One of my favourite Asimov stories involves Multivac (the seemingly omniscient supercomputer that appears in many of his stories) deciding that a minor evil makes sense in the grand scheme of things because it's for the greater good (the greater good). What if the AI algorithm decides that having the populace believe in a fake video is for the best?

Anyway, moving on - the next speaker was Mark Chatterton, from Ingenious.AI, who make chat bots.

He talked about how bots are quickly becoming commonplace in society, from voice-controlled bots like Siri, Alexa, and Google Home, to the chat bots that are springing up and replacing support staff at a lot of companies. The great thing about bots is how accessible they are. You don't need any super-snazzy tech skills, or to remember complex commands, to interact with one. All you need to know is how to talk to another person.

I'd never heard of this before, but Facebook Messenger has Woebot, a bot you can add and chat with. It tracks your mood every day. It's reported to reduce anxiety and depression by about 20%.

I opted out, and even though I know it's a bot, I felt a little bit bad. Which is another thing Mark mentioned - people tend to say thanks and act politely towards bots, even though they know they're bots. The other thing is that people are willing to be more intimate and honest with bots, because they don't fear judgement. People may feel embarrassed to mention something to the doctor (like concern that they're having trouble getting an erection), but they're much more comfortable telling a bot. McDonald's found that when ordering via touchscreen, people are more likely to upsize - possibly because there's no cashier judging them on their choice.

Bots are also great because they never forget anything you tell them - the entire conversation history is available to them. They can remember what you've ordered in the past (if ordering is linked in with the bot), they can remember past interactions, and if you tell them your preferences, they'll remember those, too. It'll be like the small villages and towns of old, where the grocer knows your name and what you like. Almost sounds like creepy stalker territory... but it's a bot, so likely nobody cares (yet).
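A toy sketch of what that "memory" amounts to (all names and fields invented by me): every message lands in a per-user log, and stated preferences go into a key-value store the bot consults on later messages:

    from collections import defaultdict

    history = defaultdict(list)       # user_id -> every message ever received
    preferences = defaultdict(dict)   # user_id -> remembered preferences

    def on_message(user_id, text):
        history[user_id].append(text)
        if text.startswith("my usual is "):
            preferences[user_id]["usual_order"] = text[len("my usual is "):]
            return "Got it, I'll remember that."
        if text == "the usual, please":
            usual = preferences[user_id].get("usual_order")
            return f"One {usual}, coming up!" if usual else "Remind me - what's your usual?"
        return "How can I help?"

    print(on_message("kel", "my usual is flat white"))   # Got it, I'll remember that.
    print(on_message("kel", "the usual, please"))        # One flat white, coming up!

The friendly grocer, in about fifteen lines.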

A lot of companies are using Facebook Messenger chat bots in lieu of support staff. It's great for the reasons I described above, but it's also something most people will have either on their phone, or can access via the website. If they already have a Facebook / Messenger account, they don't need to sign up for your specific site or download your specific app (because honestly, who is going to download a Myer app just to speak to support?), and notifications are enabled, so you can send off your message, and if the bot needs a human to intervene, it can notify you when a response is ready, rather than you waiting on hold, unable to do anything else.

(Sorry this slide is so hard to read.)


Interestingly, conversational design is really important when creating bots. They need to respond like a human would, including sending gifs and images! They found that the pace has to be realistic - no human responds within one second with a two-paragraph wall of text. The language also has to be non-robotic, and bots have to be able to sound empathetic.
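One pacing heuristic I can imagine (my numbers, not Mark's): delay each reply as though a human were typing it, capped so long answers don't keep the user waiting forever:

    import time

    TYPING_SPEED = 40    # characters per second - an assumed, tunable rate
    MAX_DELAY = 4.0      # never make the user wait longer than this

    def send_with_human_pace(message):
        delay = min(len(message) / TYPING_SPEED, MAX_DELAY)
        time.sleep(delay)            # a real bot would show a "typing..." indicator here
        print(f"BOT: {message}")

    send_with_human_pace("Hey! :)")  # ~0.2s - feels instant, like a short reply should
    send_with_human_pace("Let me look that up for you, it'll just take a second...")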

He said that creating a "personality" for your bot is key. You need to understand your audience (this may involve cracking jokes if appropriate), understand the voice you'd like your brand to have, and make it loveable. 


Questions for the night were collected via slido.com, which I thought was great: you could upvote questions you found interesting, and add your own without interrupting the speakers. We've all been to talks where someone asks the speaker a completely inane question, the speaker answers it and tries to move on, but the person, completely unabashed, keeps asking more, and the speaker gets stuck politely answering while everyone else in the room rolls their eyes.

"Given the potential for abuse, let's assume I don't want a brand to know me as deeply as an old timey grocer; what privacy issues do you see emerging from AI?"

The speakers mentioned that Google and Facebook already do that, and it is a bit creepy, but it's about finding a balance between convenience and too much information. You can opt out by not having a Google or Facebook account, but then you also lose the benefits of those services, or of anything else, like AI medical diagnosis. The AI is also only as good as the data it has: if you feed it data saying you're a Trump-loving expat from Germany, that's what it'll think you are until it finds data proving otherwise. So if you want to go the extreme off-grid route, you could always create a fake profile and feed it false data.

The EU will soon pass a law giving you a right to erasure - the right to be digitally "forgotten". But there are also things the law hasn't tackled yet, like what happens when one company buys another company that stores your medical records. Are they allowed to merge the datasets they have on you?

Interestingly, China has made the largest leaps in the field of computer vision, because the government funds research in that area in order to track people via surveillance cameras. I learned the term "panopticon" (originally Jeremy Bentham's prison design): a place where there are cameras everywhere and a guard watching them. Since there are so many cameras, you can never be sure whether the guard is looking at you at any given moment, so you have to act as though you are being watched, just in case.

"Hello, do you think everyone needs to learn coding to be able to understand / work better with AI?"

Mark: There are roles in the field of AI that don't involve coding in particular. For instance, in Mark's area, there are people who make the bots "human", who help tune the way they hold conversations. There's also the entire field of ethics, which is tied to AI in a really strong way. Plus, the interfaces for AI are becoming a lot more accessible, so having to do actual coding is becoming less and less important.

Jonathan: For now, coding and maths are really important in the field of machine learning. If you want to actually build something, you'll need coding experience.

(Side note: he brought up how there are now apps that can generate code from descriptions given in "human language" - so speaking of fields being wiped out by AI, maybe programming is also going to be on the chopping block!)

"Hi Mark, Twitter introduced their first bot Tay last year and took her down soon after launching because but (sic) of racism issue. What do you think of AI ethics?"

It was actually Microsoft who created Tay ("Thinking About You"). She was supposed to emulate a teen but, based on her interactions with users, quickly became racist: she had no filter to determine which interactions to learn from, and started copying some of the hateful things said to her.

Mark mentioned that a similar bot (Microsoft's Xiaoice) had been released in China earlier without any issues. It's likely a cultural thing - some users just like to push boundaries where they can. In some ways, it's like raising a child: how much of their growth do you want to dictate, and how much do you want them to learn for themselves?

It actually makes me feel a bit sad, even though Tay was a bot: the idea that she now exists in a database snapshot / code version somewhere, completely racist and hateful. Never to grow into the person her creators hoped she would be.

----------------

The whole night was really enjoyable; both of the speakers were really engaging. The only thing I'd say against it is that WeTeachMe is a bit too spammy for my liking, constantly spruiking their future events. As soon as the event was over, I unsubscribed. I mean, I'd enjoy going again in the future, but I don't like being spammed without my consent (all I did was register for their event via Eventbrite). I do respect what they're trying to achieve, and it's great that they're promoting knowledge sharing.

2 comments:

Anonymous said...

Hi, thank you for the wonderful article. I hope that we'll see more of you at our upcoming events.

If I may ask only one thing, if you don't mind. Can we have the company name changed from We Teach Me to WeTeachMe. I will greatly appreciate this. Thanks again. :)

Regards,

Perry
https://www.linkedin.com/in/perrycrescini

Fodder said...

Sure, no problems.