In the ever-growing world of automation, there are many things that can go wrong. Automation is a great tool for acquiring, filling, managing, and processing data. It makes mundane tasks more controlled and easier, due to the sheer fact that there is less mental overhead for the user or programmer. However, with all the benefits of AI and machine learning, there are bound to be downfalls and hiccups. In this article, the author describes how Facebook’s algorithm terminated his account just 30 minutes after its creation. He had locked down his account to maximum privacy (as much as Facebook allows) and posted a comment saying “I hate Facebook.” Thirty minutes later he was banned, and he received two email invitations to log back in. Each time he logged in, it simply told him he was banned.
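The article never reveals how Facebook’s sweeper actually works, but a naive keyword filter, sketched below with entirely invented names, shows how this kind of blunt automation could flag a harmless, private complaint with no regard for context:

```python
# Hypothetical sketch of a naive keyword "sweeper" -- NOT Facebook's real
# system. The phrase list and function name are invented for illustration.
BANNED_PHRASES = {"i hate facebook"}

def should_flag(comment: str) -> bool:
    """Flag any comment containing a banned phrase, with no context check."""
    text = comment.lower()
    return any(phrase in text for phrase in BANNED_PHRASES)

print(should_flag("I hate Facebook."))  # True: a private grumble gets flagged
print(should_flag("I love Facebook!"))  # False
```

A filter this simple cannot tell criticism from abuse, which is exactly why untested or opaque automation produces the kind of ban the author experienced.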
The reason he brings this up is that he believes the companies that build these AI “sweepers” are completely in control of them. A good example of what he is describing is the YouTube subscriber purge incident. A few months ago, YouTube users noticed they were being mass-unsubscribed from their favorite creators, and the creators themselves noticed their subscriber counts falling off a cliff. After enough public outcry, YouTube finally addressed the situation as a “bug in the algorithm.” A few months later, viewer numbers dropped dramatically, and YouTube issued the same statement: “there was a bug in the algorithm.” Many speculate that YouTube (and other companies) know exactly what they are implementing; they just don’t want to tell you.
These kinds of situations loom large in testing. When testing for quality and outcomes, especially in AI, developers need to decide whether the algorithms they introduce or the UI they develop will actually make or break anything. From this, I learned that software testing is a key factor in making sure that what you want to happen actually happens.