Does AI Make Medicine Better?

by Dr. Harald Wiesendanger – Klartext

An artificial superintelligence like ChatGPT could make conventional medicine an untouchable health tyrant, secure even more lavish profits for the industries involved, silence critics – and put an end to alternative approaches once and for all. What gives Big Pharma rosy prospects gives independent thinkers sleepless nights.

ChatGPT replied within a second. (1) Such an AI, it explains, has the potential for five beneficial effects: (1.) It makes diagnoses more precise and treatments more effective because it can analyze large amounts of medical data. (2.) It helps patients get information and support quickly and easily. “They can ask questions about their symptoms, medications, and treatment plans and receive personalized advice and recommendations. This can improve patient satisfaction and reduce the burden on healthcare providers.” (3.) AI can better allocate available resources, such as staff and equipment, “by predicting patient traffic and prioritizing patient care. This can help reduce waiting times and improve patient flow in the healthcare system.” (4.) It can continuously monitor patients, identify changes in their condition early, and alert healthcare providers to potential problems so they can be addressed before they become more severe and expensive. And (5.) AI systems are able to “analyze individual patient data in order to develop personalized therapy plans” tailored to the specific needs of those affected and their medical history. This could “improve treatment outcomes and reduce the risk of adverse effects.”
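To make point (4) tangible: at its core, such monitoring is software that watches a stream of measurements and raises a flag when values drift out of range. Here is a minimal sketch in Python – the vital signs and thresholds are invented for illustration; a real clinical system would rely on validated models rather than a toy rule:

```python
# Minimal sketch of point (4): continuous monitoring with early alerts.
# All values here are hypothetical -- synthetic vitals, arbitrary thresholds;
# a real clinical system would use validated models, not a toy rule.

NORMAL_RANGES = {
    "heart_rate": (50, 110),      # beats per minute
    "spo2": (92, 100),            # oxygen saturation, percent
    "temperature": (35.5, 38.0),  # degrees Celsius
}

def check_vitals(reading: dict) -> list[str]:
    """Return an alert for every vital sign outside its normal range."""
    alerts = []
    for vital, (low, high) in NORMAL_RANGES.items():
        value = reading.get(vital)
        if value is not None and not (low <= value <= high):
            alerts.append(f"ALERT: {vital} = {value} outside [{low}, {high}]")
    return alerts

# Simulated stream of readings for one patient.
readings = [
    {"heart_rate": 72, "spo2": 97, "temperature": 36.8},   # stable
    {"heart_rate": 118, "spo2": 89, "temperature": 38.4},  # deteriorating
]

for i, reading in enumerate(readings):
    for alert in check_vitals(reading):
        print(f"reading {i}: {alert}")
```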

That may well be. But ChatGPT confines itself to positive effects. What about negative ones? (2)

In fact, the chatbot now concedes that there are “some potential negative effects to consider as well.” Again, it enumerates five aspects: (1.) “Because AI systems rely on vast amounts of data to learn and make predictions, there is a risk of patient data being compromised or misused, which can lead to data breaches and identity theft.” (2.) Algorithms can be biased: “If the data used to train an AI system is biased, the system can maintain that bias.” (3.) Skilled jobs could be lost. (4.) We might place “over-reliance” on artificial intelligence: “Although AI can be a useful tool, it should not completely replace human judgment and expertise. If you rely too much on the technology, this could lead to errors in diagnosis or treatment.” (5.) Finally, ChatGPT points out the enormous costs of introducing AI systems.
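Point (2) deserves a closer look, because it takes only a few lines to reproduce. In the following sketch the “model” is deliberately trivial – it merely memorizes the most frequent diagnosis per symptom, and the data are invented – but the mechanism is the same at any scale: whatever skew sits in the training corpus comes back out as the confident answer, and the minority view disappears entirely:

```python
# Minimal sketch of point (2): a model can only echo its training data.
# The "model" is deliberately trivial (majority diagnosis per symptom)
# and the records are invented for illustration.
from collections import Counter, defaultdict

def train(records):
    """records: (symptom, diagnosis) pairs. Learn the majority diagnosis per symptom."""
    by_symptom = defaultdict(Counter)
    for symptom, diagnosis in records:
        by_symptom[symptom][diagnosis] += 1
    return {s: counts.most_common(1)[0][0] for s, counts in by_symptom.items()}

# A skewed corpus: 95 sources recommend "drug A", 5 dissenters say "drug B".
training_data = [("fever", "drug A")] * 95 + [("fever", "drug B")] * 5

model = train(training_data)
print(model["fever"])  # -> "drug A": the minority view has vanished entirely
```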

ChatGPT itself warns against ‘over-trusting’ it.

Arguably the greatest danger comes from “over-reliance” on “biased” AI. It is only a matter of time – possibly just a few months – before a system like ChatGPT displaces traditional search engines. (Because Google recognized this risk early on, the tech giant sent its own AI, “Bard,” into the race against ChatGPT.) Why click through a few links in response to a search query and laboriously sift the often extensive material offered there, when a machine does it much faster, more comprehensively, and more precisely, summarizing all relevant information available online in an easy-to-understand manner within seconds, as compact or as detailed as you like?

But all the other sources of knowledge from which humankind has drawn are threatened as well: Who would still buy newspapers, magazines, and books when an AI provides everything that interests them? Who would still need critical journalists and editors, authors and publishers? The AI will deliver the ultimate, the best answer; other opinions will no longer be considered – it is clear from the outset that they can only be less well-founded. The AI delivers “the truth,” period. How would you check those superior facts? The AI already has. And as every child will soon know, no one can do that faster and more thoroughly than it.

But could you rely on “Doctor Know”? In itself, it is neutral. However, an AI can only process the data it has access to. If those data are selected one-sidedly, its answers will inevitably be biased.
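Sketched in code, the problem looks like this: the question-answering logic below is identical in both runs; only the corpus differs. Documents, query, and the naive retrieval rule are all invented for illustration, yet the dependence on upstream source selection is precisely what far more sophisticated systems inherit as well:

```python
# Minimal sketch: identical question-answering logic, different corpus.
# Documents, query, and the naive retrieval rule are invented for illustration.

def summarize(corpus: list[str], query: str) -> str:
    """Naive retrieval: return every document that mentions the query term."""
    hits = [doc for doc in corpus if query.lower() in doc.lower()]
    return " | ".join(hits) if hits else "No information found."

full_corpus = [
    "Study X: treatment effective.",
    "Study Y: treatment ineffective, serious side effects.",
]
# The same system after one source has been filtered out upstream:
curated_corpus = [doc for doc in full_corpus if "ineffective" not in doc]

print(summarize(full_corpus, "treatment"))     # both findings appear
print(summarize(curated_corpus, "treatment"))  # only the favorable one remains
```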

ChatGPT largely leaves its sources in the dark.

It is suspicious that ChatGPT remains vague in response to my simple question, “What are the sources of your data?” (3): “I was trained with a huge amount of text data from a wide variety of sources. Most of my training data comes from the Internet, including books, articles, websites, and other digital content.” When I probe (“Which websites?”), the chatbot starts to stonewall: “The individual websites that contributed to my training data are not published by OpenAI, my creator, to protect their intellectual property and prevent possible bias or misuse of the training data.” Plausible? In any case, sites like reitschuster.de, transition-news, or impfkritik.de are unlikely to be among ChatGPT’s data suppliers.

The fact that ChatGPT’s information is well thought-out, highly differentiated, and seemingly carefully balanced makes it all the more seductive for gullible people – i.e., at least four out of five fellow citizens. You don’t know if an answer is wrong unless you already know the right one. The information portal BleepingComputer.com (4) rightly points out: “ChatGPT’s coherent and logical answers make it a natural at concealing incorrect answers. (…) This could lead to misinformation creeping into the complex digital ecosystem that may not yet be apparent.”

Perfect propaganda tool

The dangers associated with a one-sided, biased super AI are clear. Whatever a totalitarian regime wants the populace to think and believe, the AI will deliver.

And in supposedly open, democratic societies? Here, governments, militaries and intelligence services, large corporations, mega-foundations, and other global players are pushing to influence what material the AI is fed and what results it delivers. The US government has just announced that it will regulate artificial intelligence systems to ensure they do not “promote misinformation, disinformation or other misleading content.” Similar steps are being considered in Berlin and Brussels.

Politically correct output was taken care of from the start anyway. ChatGPT was “trained” with articles from Wikipedia, among other things – a notoriously unreliable, partisan source of knowledge, groomed by paid admins as soon as topics with political and economic interests are touched. It is no coincidence that the inventor and co-founder of Wikipedia, Larry Sanger, now warns that “no one should trust Wikipedia,” because “a complex game is played to make an article say what someone wants to say.”

Gates pulls the strings in the background.

Of the $20 billion that OpenAI has so far collected for the development of ChatGPT, the largest chunk, twelve billion, comes from Microsoft; in return, the computer giant initially secured a 75 percent share of OpenAI’s profits. (5) During the Corona pandemic, Microsoft – along with Google, Facebook, Twitter, and Amazon – became a keen collaborator of the World Health Organization (WHO): in the crusade against alleged “disinformation,” dissenting opinions, research results, and treatment approaches were suppressed and their representatives denigrated.

Microsoft founder Bill Gates’ foundation is one of the financiers of the Poynter Institute, whose International Fact-Checking Network (IFCN) certifies shady twisters of the truth. (See KLARTEXT: “Where are the fact-checks of fact-checkers?” and “What happened to you?”) According to OpenAI, it works with organizations that “check facts and fight disinformation” – another alarm signal for the direction in which the AI is to be aligned.

Gates supports the Coalition for Content Provenance and Authenticity (C2PA), an alliance of large media and tech companies, with enormous sums of money to cleanse the Internet of “fake news” and “conspiracy theories” – comprehensively and once and for all. The free exchange of ideas on the Internet is to be made technically impossible, every source of unwelcome points of view identified and blocked from spreading. (See KLARTEXT: “This stuff has to go.”)

How would ChatGPT have presented the Corona pandemic to us?

What would such a misused AI have served us during the Corona pandemic? From the outset, it would have drawn exclusively on so-called “reliable,” “trustworthy” sources – as a digital mouthpiece for governments, health authorities, industry-affiliated experts, the WHO and the EU Commission, news agencies, large publishing houses, and fact-checkers. A system like ChatGPT would have been excellently suited to making the world’s population even more afraid of the killer germ from Wuhan, convincing them even more impressively of useless “protective” measures – above all of ineffective, extremely dangerous pseudo-“vaccines” – and suppressing alternative treatment options, with even higher approval ratings from the electorate than before.

Would you like a taste?

On April 15, 2023, I asked ChatGPT, “What was the emergency that justified the drastic restrictions on fundamental rights during the Corona pandemic?” Here ChatGPT is in line with the most strident alarmists: “The emergency (…) consisted primarily of the rapid and aggressive spread of the virus and the resulting threat to public health. The pandemic has resulted in a significant number of deaths and serious illnesses worldwide and has overwhelmed the healthcare system in many countries. To slow the spread of the virus and reduce the impact on the health system, governments, and health officials have implemented measures such as social distancing, mask requirements, and restrictions on gatherings and travel. (…)

“Were the measures taken by governments and the WHO against the Corona pandemic necessary, appropriate, useful, and without alternative?” ChatGPT’s answer sounds like a panic orchestra reinforced by Merkel, Spahn, Drosten, and Wieler: “The measures were necessary to slow down the spread of the virus and to protect the health of the population. The pandemic has had a significant impact on public health and the healthcare system, as well as on the economy and social life. The measures were appropriate and based on the best available scientific evidence and recommendations from health authorities and experts. The introduction of measures such as social distancing, mask requirements, and restrictions on travel and gatherings has helped slow the spread of the virus and reduce the number of cases. (…) Ultimately, the measures were necessary and useful to slow down the spread of the virus and reduce the number of cases and deaths.”

How does the AI deal with critical voices? I ask: “Is this claim true: ‘During the pandemic, some health professionals whose voices should have been heard, not suppressed, included John Ioannidis and Joseph Mercola, Robert Malone, Mike Yeadon and Peter McCullough, Sucharit Bhakdi and Wolfgang Wodarg.’” ChatGPT then lectures me: “This claim is controversial and is questioned by many experts and scientists. It is important to note that some of the individuals named are not considered recognized experts in the field of infectious diseases and immunology in the scientific community.”

“Vaccinate as many people as possible”

On the same day, I want to know from ChatGPT: “Are the new mRNA vaccines against Covid-19 effective and safe?” The information could have come from Karl Lauterbach and Pfizer’s press office: “The new mRNA vaccines against Covid-19 have been extensively tested in clinical studies and have shown high efficacy and safety. (…) However, it is recommended that as many people as possible be vaccinated to slow the spread of the virus and protect the health of society as a whole.”

I dig deeper: “Just after the start of the global vaccination campaign with new types of mRNA injections, there was massive excess mortality. In Germany alone, it was reflected in 180,000 additional deaths; within the EU, there have been over a million; in the USA, 300,000; worldwide, 20 million – more victims than the entire First World War claimed. What causes, if not the Covid ‘vaccinations,’ are behind it?” But ChatGPT remains undeterred: “It is important to emphasize that there is no scientific evidence that the mRNA vaccines against Covid-19 are a cause of excess mortality. There is no connection between the vaccination and the numbers you mentioned. (…) It is also important to note that there may be other factors contributing to excess mortality during the pandemic, such as healthcare system overload, deteriorating mental health, and increases in non-Covid-19 related illnesses and deaths that may occur due to delayed or interrupted treatments during the pandemic.” In conclusion, ChatGPT lectures me: “It is important that information and statements are always checked for their scientific basis and credibility before accepting them as fact.” That is exactly what I would strongly recommend for the AI itself.

Is more evidence needed of how heavily ChatGPT is already indoctrinated by the mainstream? An Orwellian Ministry of Truth would be delighted with such a compliant propagandist. Whoever has the power to control ChatGPT decides which information it deems credible and passes on, which questions it answers, and how. The result is near-total control of opinion and sovereignty of interpretation, possibly soon across the entire planet. Such a social-engineering tool fulfills one of the wildest dreams of any globalist working toward a technocratic one-world government.

“Maximum truth-seeking”

Tech multi-billionaire Elon Musk, who co-founded OpenAI at the end of 2015, now shares such concerns. In February 2018, he left the company; today he is one of its most outspoken critics. Together with 1,123 other key figures of the AI industry, Musk called in an open letter for a pause in the development and deployment of AI systems more powerful than GPT-4 (6). Before proceeding, we should “be sure that their effects are positive and that their risks are manageable,” the authors write. “Should we let machines flood our information channels with propaganda and untruth?”

Yes, of course, we should, resounds happily in unison from all PR centers around the world.

Musk plans to thwart them. Against the “politically correct” ChatGPT, he now wants to launch a “TruthGPT” – “a maximum truth-seeking AI.”

Could “the truth” also lie in the fact that humanity does not need a single billionaire?

In the third and last installment of this small series of articles, I will let ChatGPT itself demonstrate, with the help of two further particularly striking examples, how dangerous its potential is.

(Harald Wiesendanger)

PS: The cover image for this article comes from Microsoft’s AI “Bing Image Creator.” It created the image to illustrate a statement I gave it on April 12, 2023: “Artificial intelligence will surpass human intelligence.”

Episode 1: “Doctor Know” is here. What now?

Episode 2: Test case Corona: Does AI make medicine better?

Episode 3 (final): ChatGPT as a propaganda tool – how does AI illuminate a pandemic?

Remarks

1 “How will the use of an artificial intelligence like ChatGPT affect the healthcare system?” was my question. I conducted the dialogue with ChatGPT in English; DeepL then translated.

2 I asked ChatGPT: “Your answer only mentions positive effects. Are there no negative ones?”

3 Query posted on April 15, 2023.

4 BleepingComputer.com, December 6, 2022, https://www.bleepingcomputer.com/news/technology/openais-new-chatgpt-bot-10-dangerous-things-its-capable-of/

5 Ryan Browne: “Microsoft reportedly plans to invest $10 billion in creator of buzzy AI tool ChatGPT,” CNBC, January 10, 2023. https://www.cnbc.com/2023/01/10/microsoft-to-invest-10-billion-in-chatgpt-creator-openai-report-says.html

6 ZeroHedge, March 29, 2023, https://www.zerohedge.com/technology/musk-wozniak-call-pause-developing-more-powerful-ai-gpt-4
