Any opinions, findings, conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of NursingAnswers.net.
Title: Artificial Intelligence in Healthcare: Using AI to make medical diagnoses
a) Artificial intelligence (AI), sometimes called machine intelligence, is the ability of a digital computer to perform tasks that imitate intelligent human behaviour (Encyclopedia Britannica, 2019). The term also extends to any device that displays characteristics of the human mind, such as the ability to reason, solve problems, or learn from past experience. AI can be applied to almost every medical field, and its future contribution to healthcare research and delivery seems endless. Though its impact has been limited, AI has been used to build healthcare technology since the 1970s (Ross & Webb, 2019). An article recently published in Forbes cited a study in The Lancet Digital Health that compared the performance of deep learning – a form of AI used to detect diseases from medical images – with that of healthcare professionals; the results showed remarkable evidence of how far AI has advanced in recent years (Martin, 2019). Amazon has also announced that Textract, its AI-powered service for automatically extracting text and data from scanned documents, is now HIPAA eligible, making it another AI tool available for use in medical care (Hendrickson, 2019). In this paper, we will discuss the ethical and technical implications of AI in healthcare in more depth.
b) In recent years, artificially intelligent computer programs have been shown to diagnose skin cancer more accurately than board-certified dermatologists (Rigby, 2019). The Lancet Digital Health study likewise found that, for disease diagnosis, AI has become a more accurate source of diagnostic information (Martin, 2019). Nonetheless, this technology raises a substantial set of ethical challenges that must be addressed properly, because AI has the potential to threaten patients' safety, preferences, and privacy. One central ethical question is: 'If an AI system makes the wrong decision, who is responsible for the error?' In cases of misdiagnosis, giving patients the wrong course of treatment could put them at greater risk of other life-threatening illnesses or even death. Misdiagnosis is also an ethical issue because patients could be given false hope of recovery, which in turn could be even more detrimental to their health, both emotionally and physically.
In 2012, doctors at Memorial Sloan Kettering Cancer Centre partnered with IBM to train Watson to diagnose and treat cancer patients (Chen, 2018). According to internal IBM documents, the supercomputer frequently gave bad advice: in one example, it recommended giving a cancer patient with severe bleeding a medication that could make the bleeding worse. From the source cited above, one doctor told an IBM executive: "This product is a piece of s—". In a case like this, who is to blame for the flawed training data fed to Watson?
c) This technology is still in its infancy, and much testing is needed before it can be released for commercial use. Medical misdiagnosis is not uncommon: a group of Johns Hopkins researchers in Baltimore recently reviewed tissue samples from 6,000 cancer patients in the US and found that in one of every seventy-one cases the patient had been misdiagnosed (Ainslie, 2018). IBM proposed a solution to this problem (Fitzgerald, 2012):
- Feed past and recent cancer research and – with patient consent – individual medical records into Watson
- Test Watson with increasingly complex cancer scenarios, assessed with the help of an advisory panel
The computer's ability to absorb all this knowledge and find the right diagnosis in seconds would help doctors keep up with the enormous amount of new information published daily. In addition, "Watson can even be instructed about 'individual patient preferences', Kohn said" (Fitzgerald, 2012). For example, if a patient feels strongly about hair loss, that preference could be taken into account when considering treatments. This shows Watson's ability to personalise itself to the patient's needs, which is crucial in making patients feel comfortable. By taking the necessary steps to properly train the technology, misdiagnosis may become a thing of the past. Done correctly, the diagnosis and treatment of patients would be much faster and more efficient, enabling patients to live longer, healthier lives.
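The workflow IBM proposed – train on consented historical cases, then test on harder scenarios with uncertain results escalated to a human advisory panel – mirrors a standard supervised-learning loop. The sketch below is purely illustrative and is not IBM's actual system: the model (a toy nearest-neighbour classifier), the symptom vectors, and the escalation threshold are all hypothetical, and only the Python standard library is used.

```python
# Illustrative sketch only (hypothetical names and data, not IBM's pipeline):
# a toy 1-nearest-neighbour "diagnoser" that memorises labelled past cases
# and flags unfamiliar cases for review by a human advisory panel.

from math import dist

def train(records):
    """'Feed' labelled historical cases (feature vector, diagnosis) into the model."""
    return list(records)  # a 1-NN model simply memorises its training data

def diagnose(model, case, escalate_threshold=2.0):
    """Return (diagnosis, needs_panel_review) for a new case."""
    nearest = min(model, key=lambda r: dist(r[0], case))
    # A case far from anything seen in training is low-confidence,
    # so it is escalated to the advisory panel rather than trusted blindly.
    return nearest[1], dist(nearest[0], case) > escalate_threshold

# Consented, labelled historical cases (entirely fabricated numbers)
history = [((1.0, 0.2), "benign"), ((4.0, 3.5), "malignant")]
model = train(history)

print(diagnose(model, (1.1, 0.3)))  # close to a known case: no review needed
print(diagnose(model, (9.0, 9.0)))  # unlike anything seen: flag for the panel
```

The escalation flag is the point of the sketch: it encodes the advisory-panel step of IBM's proposal, so the system defers to humans exactly where its training data gives it least support.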
- Encyclopedia Britannica. (2019). artificial intelligence | Definition, Examples, and Applications. [online] Available at: https://www.britannica.com/technology/artificial-intelligence
- Ross, J., Webb, C., & Rahman, F. (2019) “What is AI? A primer for clinicians”, Artificial Intelligence in Healthcare, pp: 8 https://www.aomrc.org.uk/wp-content/uploads/2019/01/Artificial_intelligence_in_healthcare_0119.pdf
- Rigby, M. (2019) “Ethical Dimensions of Using Artificial Intelligence in Health Care”, AMA Journal of Ethics
- Martin, N. (2019) “Artificial Intelligence Is Being Used To Diagnose Disease And Design New Drugs”, Forbes
- Chen, A. (2018) “IBM’s Watson gave unsafe recommendations for treating cancer”, The Verge
- Ainslie, J. (2018) “An Epidemic of Misdiagnosis: Using AI To Solve A Quiet Crisis In Healthcare”, AI Business
- Fitzgerald, J. (2012) “IBM, NYC hospital training Watson in cancer”, The Seattle Times
- Hendrickson, Z. (2019) “Amazon’s AI-powered Textract is now eligible for HIPAA compliance”, Business Insider