    June 14, 2021

    Machine Learning Gone Wrong with Janelle Shane

    Posted by INE

    A.I. Humorist Janelle Shane joined us at redefINE last week and gave an intriguing talk focused on artificial intelligence and the many ways machine learning algorithms can get things wrong.

    She kicked things off by sharing examples of computer programs attempting to generate content traditionally created by humans, such as recipes, cat names, and candy heart messages, with hilarious results. Inspired by these outcomes, she trained a neural network to name pies by providing it with a list of 2,000 existing pie names. The results included Cherry Pie with Cheese Fashions, Mothy Mincemeat Cheese, Cranberry Yaas, and other less-than-appetizing but humorous names.
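
    Shane’s generators are character-level models trained on small corpora. As a rough sketch of the underlying idea, here is a minimal character-level name generator; it swaps her neural network for a simple Markov chain, and the tiny pie-name list is a placeholder rather than her actual training data:

```python
import random
from collections import defaultdict

# Placeholder corpus; Shane's actual list contained about 2,000 pie names.
PIE_NAMES = [
    "Cherry Pie", "Apple Crumble Pie", "Key Lime Pie",
    "Pecan Pie", "Pumpkin Pie", "Blueberry Cream Pie",
]

ORDER = 2  # characters of context: higher = more faithful, lower = more inventive

def build_model(names, order=ORDER):
    """Map each `order`-character context to the characters observed after it."""
    model = defaultdict(list)
    for name in names:
        padded = "^" * order + name + "$"  # ^ marks the start, $ marks the end
        for i in range(len(padded) - order):
            model[padded[i:i + order]].append(padded[i + order])
    return model

def generate(model, order=ORDER, max_len=40):
    """Sample one name character by character from the learned contexts."""
    context = "^" * order
    out = []
    while len(out) < max_len:
        ch = random.choice(model[context])
        if ch == "$":  # the model decided the name is finished
            break
        out.append(ch)
        context = context[1:] + ch
    return "".join(out)

model = build_model(PIE_NAMES)
for _ in range(5):
    print(generate(model))
```

    With a corpus this small the output mostly recombines fragments of the inputs, which is exactly how the generator arrives at names like Mothy Mincemeat Cheese: locally plausible letter sequences with no understanding of what a pie is.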

    This led into a discussion of whether she believes A.I. is actually smart. From her perspective, today’s A.I. capabilities are very narrow, and applications tend to be most successful when the requests are equally narrow. She said, “If you think about the A.I. software that’s widespread and useful, this is software that tends to do one specific thing at a time.” According to Shane, A.I. problems most frequently occur when people make requests that are too broad. One example she provided was Facebook M, a virtual assistant application.

    The intended purpose of Facebook M was to give users an A.I. digital assistant that could answer common questions and complete simple tasks, such as arranging flower deliveries or reserving tables at restaurants. When a request was too complex, paid contractors were on standby to jump in and complete it. While some users made simple requests, many made incredibly specific, and often ludicrous, ones, so the contractors were relied on far more heavily than anticipated. The result was a serious expense for Facebook and a project that never fully took off.


    Shane continued her talk by discussing the narrow view today’s A.I. has of the real world, which is due in part to the data it is given during training. She said A.I. applications often take shortcuts to achieve an outcome rather than logically reasoning their way through a problem. “You know from experience that its performance is not flawless, so when you start to use it on problems that are just a bit broad you already have to tolerate mistakes,” she said.

    For example, an A.I. application was trained to review medical records and identify cases where patients had medical problems. Surprisingly, it successfully identified a majority of the cases where patients required treatment; however, analysis of the process and results revealed that the program based its determinations solely on the length of the record being reviewed. During training, the A.I. had learned that longer medical records were the ones most likely to contain medical problems, and that correlation became the basis for its decisions.
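
    This failure mode is often called shortcut learning. Here is a minimal sketch of how it can arise, using synthetic data and scikit-learn (purely illustrative, not the actual study): when record length happens to correlate with the label, a model can score well on length alone while never reading the medical content.

```python
# Minimal sketch of shortcut learning on synthetic data (not the actual study):
# if sick patients' records are systematically longer, a model can score well
# using record length alone, without understanding the content at all.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 2000

# Label: 1 = patient has a condition, 0 = healthy.
y = rng.integers(0, 2, size=n)

# Spurious feature: record length in words, correlated with the label
# because sicker patients accumulate longer records.
length = rng.normal(loc=500 + 300 * y, scale=100, size=n)

# Stand-in for the actual medical content: pure noise here, so the only
# usable signal is the shortcut.
content = rng.normal(size=n)

X = StandardScaler().fit_transform(np.column_stack([length, content]))
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression().fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))  # high, despite never reading content
print("feature weights:", clf.coef_[0])             # nearly all weight on the length feature
```

    The model looks accurate on paper, but the learned weights show it is leaning entirely on the shortcut; the moment it encounters a short record from a sick patient, it fails.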

    Other real-world examples she provided were self-driving cars involved in accidents many would have thought avoidable. In one instance, a car kept braking on its own because it identified an image of a stop sign on a billboard as a real stop sign.

    In another, a vehicle hit a pedestrian crossing the street because it had only been trained to identify people walking in crosswalks, not in other areas. Shane used these situations to emphasize her point that narrow world-views leave many A.I. applications dependent on human intervention. “Effectively using A.I. relies on a lot of human help, so you have to have smart humans to constrain the problem that’s narrow enough for today’s A.I. to solve,” she said.

    Because of this, Shane said that if a problem is broad and the consequences of getting it wrong are serious, even if only for a small number of people, it is not a good application for A.I. From her research and years of experience, today’s A.I. is not yet able to fully understand what we’re asking for, so we have to be smart about how we use it.

    Her talk was followed by a Q&A session with the audience, where she was asked about some of her biggest lessons learned, how close she thinks we are to the A.I. seen in movies, whether we should worry that our reliance on A.I. is getting ahead of the technology’s current capabilities, and much more.

    If you’d like to watch the recorded session with Janelle Shane, you can do so here.

