With the rise of artificial intelligence (AI) and machine learning (ML), Eliga’s Test Manager, Jon Hall, asks, ‘Will AI change the future of testing for better or worse?’
The rise of Artificial Intelligence
According to Semrush, ‘The global AI market is predicted to snowball in the next few years, reaching a $190.61 billion market value in 2025.’
Despite this prediction, ‘A whopping 93% of automation technologists feel underprepared for upcoming challenges regarding smart machine technologies.’
Additionally, a lack of trained and experienced staff presents another obstacle in the man-versus-machine debate. The statistics make it clear that we have reached a point where demand outstrips the available skills.
There are ethical concerns about AI too, regarding embedded biases. According to HBS, ‘AI can help identify and reduce the impact of human biases, but it can also make the problem worse by baking in and deploying biases at scale in sensitive application areas.’
The role of testers is changing with AI/ML
The role of the software tester is evolving with the introduction of new and more complex technologies. The tester will continue to fight for what’s right, playing a vital role in mitigating the risks of automation, AI and machine learning. Technology is only as intelligent as we design it to be. According to Deloitte, ‘as open-source AI models are becoming more user-friendly, there might be more AI applications built by people who do not completely understand the technology.’
Transparent AI can help address these barriers. It makes AI understandable, showing how models were tested and how they reach their decisions. It also mitigates embedded biases, especially around fairness, discrimination, and trust.
The lesson of US Airways Flight 1549
A great film that makes a good case for the human over the computer is Sully: Miracle on the Hudson. It tells the story of US Airways Flight 1549, which had to make an emergency landing on the Hudson River in New York after birds struck both engines. The film pits computer simulations against the real-life scenario.
In a pivotal scene, Captain Sullenberger argues that computer simulations do not take the human factor into account. Although human-piloted simulations showed that the pilots could have made it back to the airport, those simulations did not consider what it feels like to experience both engines failing simultaneously for the first time.
Sullenberger argued that the pilots in the simulator ‘knew the turn and exactly what heading to fly.’ They neither ran their checklists nor switched on the Auxiliary Power Unit (APU). The simulation allowed no time for analysis or decision-making. In these situations, simulators take ‘the humanity out of the cockpit.’
Sullenberger argues that when examining human error, the human factor must be considered. Consequently, he urges the panel to rerun the simulations with a 35-second delay added to the response time. With these modified parameters, the plane crashes in every simulation.
Can machines think like humans?
‘The day a machine or computer can think as humans do, the sooner Terminator 2: Judgment Day could occur, and security agencies will realise what machines can do.’ – Jon Hall
Machines and computers are still a very long way from demonstrating anything close to human intelligence. However, AlphaGo beat humanity at its own game in 2016, competing against legendary Go player Lee Sedol. Although AlphaGo beat Sedol, games like Go represent a small fraction of the human experience. Speaking with WIRED, DeepMind’s David Silver acknowledged, ‘The real world is massively complex, and we haven’t built something, which is like a human brain that can adapt to all these things.’ However, he adds that the ‘brain is a computational process’, noting that it is a lot less ‘magical’ than some people think.
Presently, AI is great for sifting through large volumes of data and making recommendations. In this way, it can spot mistakes and errors in code for correction.
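As a toy illustration of this statistical approach (not any specific tool or the author’s own method), even a crude frequency count over a codebase can surface likely mistakes: an identifier that appears only once is often a typo. Real AI code-review systems learn far richer patterns, but the underlying idea of flagging the unusual is similar. All names in this sketch are hypothetical:

```python
import re
from collections import Counter

def tokenize(line):
    # Crude identifier tokenizer; a real tool would parse the AST.
    return re.findall(r"[A-Za-z_]\w*", line)

def flag_unusual_lines(code_lines, threshold=1):
    """Flag lines containing identifiers that appear rarely across the codebase.

    Rare tokens are often typos (e.g. a misspelled variable name) -- the kind
    of statistical signal an AI review tool can exploit at scale.
    """
    counts = Counter(t for line in code_lines for t in tokenize(line))
    return [
        (i, [t for t in tokenize(line) if counts[t] <= threshold])
        for i, line in enumerate(code_lines, start=1)
        if any(counts[t] <= threshold for t in tokenize(line))
    ]

# Hypothetical snippet: 'totl' occurs only once, so line 4 is flagged.
code = [
    "total = price + tax",
    "print(total, price)",
    "tax = tax * rate",
    "total = totl + rate",
    "print(total, tax)",
]
print(flag_unusual_lines(code))  # -> [(4, ['totl'])]
```

A tester would still review each flag, of course; the point is that the machine narrows thousands of lines down to a handful worth a human’s attention.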
‘A human-centric designed application is still going to need my team of testers crawling over it before I sign it off for production.’ – Jon Hall
Reimagining the future
What’s your opinion on Alexa? Do you think she is listening to your conversations and learning? Imagine what Alexa will be able to do in 5-10 years. Think of I, Robot, set in the year 2035, where cars drive you everywhere and robots do everything for you. Drones could replace your postman; hovercrafts could replace cars – it will be interesting to see what the future holds for technology.
I think it will be some time before we make ourselves redundant, but this is an interesting topic and I would be glad to hear people’s thoughts on it. The AI/ML journey is only just getting going and I am fascinated to see how it will be applied to testing on projects.
To Summarise
Next time you are browsing your go-to website, using your favourite social media app, or playing a video game, think about the time and effort spent on the overall lifecycle – not just the coding aspect, but also the quality aspect. This is what we live and breathe in our jobs.
Whilst testing is often taken for granted, we should be grateful it exists; without it, you could face some frustrating system behaviours.
Want to learn more about Eliga’s technology services?
Visit our technology page to understand how we work and what services we offer.
Follow Eliga on LinkedIn to see our company news.