
source: economictimes.indiatimes.com

John Wyatt, pediatrician and research scientist, conducted an excellent webinar on “Artificial Intelligence and the Future of Healthcare”. This webinar is part of a helpful series of webinars by the International Christian Medical and Dental Association (ICMDA), held weekly since the start of the COVID-19 pandemic. Focusing on healthcare, Wyatt highlighted the rapid progress Artificial Intelligence (AI) has made in the field. Beginning with machine learning algorithms and pattern recognition, it has improved much since then. AI now appears in the real world as Babylon, a phone app that diagnoses medical conditions better than general practitioners (GPs) in the UK, and Woebot, another phone app that talks with and encourages the depressed. This is only the visible tip of the iceberg. AI is already embedded in thousands of medical devices and robotic machines in hospitals and is actively engaged in diagnosis, in ensuring patient safety, and even in surgery. Wyatt’s Christian response to AI in healthcare is that AI does not provide the human solidarity that a face-to-face encounter with another human being provides.

John Lennox, apologist and mathematician, wrote in his 2020 book 2084: Artificial Intelligence and the Future of Humanity about the rapid rise of AI, especially since 2012, when deep learning and the evolving of neural networks began to seem beyond human understanding. While acknowledging the wide use of AI in general, especially in social media, surveillance, big data, and self-driving cars, Lennox is careful to point out:

It is clearly one thing to try to build AI systems that seek to mimic aspects of what the human mind can do; it is an entirely different thing to try to recreate what it feels to be human. Consciousness bars the way (Kindle loc. 153, 2020).

Lennox made the argument that (1) AI can mimic the human mind in thinking but (2) AI cannot have what we call consciousness. AI based on pattern recognition and algorithms needs a great deal of data input before it can ‘think’. Even in deep learning, it must be trained on millions of possible outcomes before it can provide an outcome of its own. Hence AI ‘thinking’ is different from human thinking. As one AI expert pointed out, a young child can be taught what an elephant is by being shown a single picture of one; an AI needs millions of photos of elephants just to be able to identify the animal. The other argument is that an AI cannot have consciousness or a soul, no matter what type of programming it receives. Consciousness is what distinguishes a living human brain from a dead one.

One discussion to which Lennox gave a lot of space is whether AI can evolve into a superintelligence like God. Lennox argues against this, stating that the created cannot surpass the creator. In my article Artificial Intelligence and God I offered a different argument: a superintelligence, if it is even possible to build one, cannot transcend space and time. Whether it would possess the will to power and dominate like human beings is best left to science fiction writers and their dystopian futures.

AI is an emerging technology, and as responsible stewards we are to control and guide its development. Like any technology, AI has the potential to improve human flourishing; it also has the potential for human destruction. Therefore, like any technology, it should be used to serve human beings.