
WEF researching use of AI-generated patients and data in clinical trials

This article was originally published by The Defender — Children’s Health Defense’s News & Views Website.

(Children’s Health Defense) — Global political and business leaders are convening this week in Davos, Switzerland, for the annual meeting of the World Economic Forum (WEF), where one of this year’s central themes focuses on “artificial intelligence [AI] as a driving force for the economy and society.”

But as world leaders prepare to discuss AI, a WEF project – first announced in 2019 – is already funding research on the use of “synthetic” AI-generated “patients” and data in clinical trials.

The synthetic data would be developed via AI – and would subsequently be fed to AI to further “train” it.

The U.K.’s pharmaceutical regulator, the Medicines and Healthcare products Regulatory Agency (MHRA), was the recipient of £750,387 (approximately $950,000) in funding for this project, which appears to support ongoing research in at least two U.K. universities – the University of Birmingham and Brunel University London.


The U.K. government’s Regulators’ Pioneer Fund funded the project in 2022. The funding appears to originate from a January 2019 agreement between the U.K. government and the WEF, “to lead [the] regulation revolution to foster industries of the future.”

As part of this agreement, the U.K. became the first country to partner with the WEF’s Centre for the Fourth Industrial Revolution, building on an “existing collaboration” in the area of AI.

Proponents of using AI to create “synthetic” clinical trial data and participants say that it is far less expensive than using human participants, addresses privacy concerns connected to data collected from humans and can improve “equity.”

But experts who spoke with The Defender expressed concerns over how this technology could be used.

Mary Holland, president and CEO of Children’s Health Defense (CHD), said it’s easy to “understand the appeal” of synthetic clinical data generated by artificial intelligence.

“Gaining real patient data is time-consuming, expensive and actually requires patient informed consent,” Holland said. “Artificial data, by contrast, require none of that. How convenient – and lucrative – for Big Pharma,” she added.

Brian Hooker, Ph.D., CHD’s senior director of science and research, described AI-generated “synthetic” patients and data as “an extremely frightening proposition.”

“Using AI to generate data – for control and experimental groups of ‘patients’ – appears to be an avenue to cut more corners in woefully under-tested approved vaccines like the Pfizer and Moderna mRNA COVID-19 vaccine,” he said.

Dr. Meryl Nass, an internist and biological warfare epidemiologist, said, “Pharma does not want real clinical trials before it can sell its wares. Most drugs and vaccines die during the trials, because people simply don’t behave like mice or rats in the real world. And clinical trials are very expensive.”

“Enter the fake clinical trial,” Nass said.

However, there’s a “catch,” according to Holland. “The data may be unbelievably wrong.”


“This plan for AI clinical trial information reminds me of Soviet election results where, remarkably, the sole candidate on the ballot always received 99% of the vote – and this was even without AI,” she said. “One can imagine similar AI clinical trial results for medical products, like the COVID-19 injection clinical trial results with 99% purported efficacy.”

Michael Rectenwald, Ph.D., author of “Google Archipelago: The Digital Gulag and the Simulation of Freedom,” said that “synthetic” clinical trial participants and data are part of the broader agenda of the WEF and its founder and executive chairman, Klaus Schwab, to introduce AI into many sectors of society.

“Schwab and the WEF think that the future belongs to AI,” Rectenwald said. “They think AI will make elections unnecessary, and now they are saying that AI will make clinical trials for vaccines, with real human subjects, obsolete. Such AI measures are open to rigging the results and the WEF would like nothing better than rigging vaccine trials – and elections,” he said.

Proponents tout ‘huge potential’ of using AI to generate synthetic patients, data

According to a 2023 paper published in Clinical Cancer Informatics:

“Synthetic data are artificial data generated without including any real patient information by an algorithm trained to learn the characteristics of a real source data set and became widely used to accelerate research in life sciences.

“Synthetic data mimic real clinical-genomic features and outcomes, and anonymize patient information. The implementation of this technology allows to increase the scientific use and value of real data, thus accelerating precision medicine in hematology and the conduction of clinical trials.

“Namely, facilitating access to training data often without such a steep cost to patient privacy, providing better access to validation or benchmarking datasets, filling gaps in data that would otherwise exist, and boosting sample sizes.”
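In plain terms, the approach the paper describes amounts to fitting a statistical model to a table of real patient records and then drawing new, artificial records from that model. The toy sketch below illustrates only that general idea: the patient columns and values are invented for illustration, and real synthetic-data generators are far more sophisticated than a simple Gaussian fit.

```python
# Minimal illustration of the idea described above: "learn the characteristics
# of a real source data set," then sample artificial records from the learned
# model. Real synthetic-data generators are far more sophisticated (e.g.
# generative networks or Bayesian models); this toy version simply fits a
# multivariate Gaussian. The columns and the tiny "real" table are hypothetical.
import numpy as np

rng = np.random.default_rng(seed=0)

# Toy stand-in for a real patient table: age, systolic blood pressure, biomarker level.
real_patients = np.array([
    [54, 132, 1.8],
    [61, 140, 2.1],
    [47, 121, 1.5],
    [70, 150, 2.6],
    [58, 138, 2.0],
], dtype=float)

# "Train" the generator: estimate the mean and covariance of the real data.
mean = real_patients.mean(axis=0)
cov = np.cov(real_patients, rowvar=False)

# Generate synthetic patients that mimic the correlations of the real data
# but correspond to no actual individual.
synthetic_patients = rng.multivariate_normal(mean, cov, size=100)

print(synthetic_patients[:3].round(1))
```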

This concept appears to build on the existing practice of using “surrogate endpoints” in clinical trials, where instead of measuring whether trial participants feel better or live longer, or looking at outcomes such as whether participants had a stroke, researchers use proxy measurements that they merely expect to be predictive.

According to Bertalan Meskó, M.D., Ph.D., director of The Medical Futurist Institute, “artificial patients” can be defined as “a set of data representing the desired human characteristics … based on large amounts of real patient data, without actually including any backtracable real-patient data.”

“Artificial patients can be the answer to more than one problem of modern medicine,” including patient privacy, Meskó wrote. “One day, virtual patients might become the go-to tools” for “estimating efficiency and potential side effects of promising drug molecules or optimising the use of existing ones,” modeling the success rate of new medical devices, or substituting for the placebo control group in clinical trials.


“Using real-world data as a patient group in a trial, often known as a synthetic control arm, can make research trials more efficient – companies don’t have to enroll as many people in clinical trials and can guarantee that those who apply will indeed receive the treatment,” he wrote, adding that “Synthetic control groups can also improve equity in clinical research.”

“As many hope, one day artificial patients may be able to completely substitute humans and animals in clinical trials, most likely with animals being the first,” Meskó wrote.

The overview of the U.K. government-funded project states that assigning human clinical trial patients to control groups “can be challenging in some health conditions, as random assignment to a control group could deny patients access to treatments that could extend their life or improve symptoms.”

The project overview adds that “Many clinical trials also find it difficult to recruit enough patients, particularly those investigating rare diseases.”

However, “Recent improvements in computing power have allowed researchers to create artificial patients, with similar health information to real patients in clinical trials,” the project overview states. This “artificial data” could then help “‘boost’ smaller clinical trials, lessening the number of patients needed to be successful.”

Artificially generated information could also be used to “better reflect groups in society that are less well represented in clinical trials, including different age groups and ethnicities,” according to the project overview, which also states:

“In the future, these approaches could be combined with, or even replace, real patient information. Success in this project could help to change the way clinical trials are performed in common and rare diseases, lowering their cost and improving how new treatments are tested.”

The two ongoing studies, at the University of Birmingham and Brunel University London, will remain in progress until early 2025. They have also produced at least one peer-reviewed paper, soon to be published in Heliyon.

According to a publicly available version of the paper, “Advanced synthetic data generators can model sensitive personal datasets by creating simulated samples of data with realistic correlation structures and distributions, but with a greatly reduced risk of identifying individuals.”


“This has huge potential in medicine where sensitive patient data can be simulated and shared, enabling the development and robust validation of new AI technologies for diagnosis and disease management,” the paper adds.

However, according to the paper’s authors, “The under-representation of groups is one of the forms in which bias can manifest itself in machine learning [and] may also lead to structurally missing data or incorrect correlations and distributions which will be mirrored in the synthetic data generated from biased ground truth datasets.”

A new approach developed by the researchers, BayesBoost, purportedly overcomes these challenges by demonstrating “an excellent ability to identify under-represented groups within data given a sensitive attribute and a target disease” and “by generating new synthetic data that do not deviate from the real data distribution.”
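The paper does not spell out the algorithm here, but the general “boosting” idea it describes – find the group that is under-represented, then top it up with synthetic records drawn from a model fitted to that group alone – can be sketched in a few lines. The sketch below is not BayesBoost itself; the group sizes, values and the single blood-pressure-style variable are invented purely for illustration.

```python
# Hedged sketch of the general "boosting" idea described above: identify an
# under-represented group and augment it with synthetic samples drawn from a
# model fitted to that group alone. This is NOT the BayesBoost algorithm from
# the paper, just a toy illustration with hypothetical data.
import numpy as np

rng = np.random.default_rng(seed=1)

# Toy dataset: one numeric measurement per patient, split by a sensitive group label.
majority = rng.normal(loc=120.0, scale=10.0, size=200)   # well-represented group
minority = rng.normal(loc=135.0, scale=12.0, size=20)    # under-represented group

# Step 1: detect under-representation (here, simply by comparing group sizes).
shortfall = len(majority) - len(minority)

# Step 2: fit a simple model to the minority group only and sample from it,
# so the synthetic values follow that group's own distribution.
synthetic_minority = rng.normal(loc=minority.mean(),
                                scale=minority.std(ddof=1),
                                size=shortfall)

# Step 3: the augmented minority group now matches the majority group in size.
boosted_minority = np.concatenate([minority, synthetic_minority])
print(len(majority), len(boosted_minority))  # 200 200
```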

‘We must insist on clinical trials with human subjects’

Even proponents of using AI to develop “synthetic” patients and clinical trial data note that there are shortcomings with the technology.

According to Meskó, “While using artificial patients for drug development or medical device development is a promising field, there is a long way to go until the models can reach the required complexity while being truly representative of the human population.”

Noting that AI models and datasets are imperfect and contain “biases,” he said, “If we let machine learning and deep learning algorithms develop on these synthetic, imperfect datasets, chances are they will come to conclusions that are more or less false in the real world.”

The authors of a paper published in Regulatory Focus questioned “under what circumstances, if any, would it be acceptable for AIaMD [artificial intelligence as a medical device] to be trained or tested upon synthetic data versus real data” and whether there are “opportunities to use synthetic data to better validate or test AIaMD models.”

But experts who spoke with The Defender expressed concerns about the very results that may emerge out of such models – and potential consequences for the public at large resulting from drugs and treatments clinically tested using such models.


Hooker said that AI-generated “synthetic” clinical trial participants and data are very likely to be problematic, as science today still lacks key knowledge about human physiology.

“AI is only as good as the algorithm produced for simulation and there is so much we don’t know about human physiology, especially at the population level,” he said. “It is hubris to believe otherwise as well as extremely dangerous to the unwitting souls that this will be foisted on.”

Nass proffered a hypothetical example. She said:

“Let’s pretend measuring an arbitrary antibody level – without demonstrating it is a surrogate for real immunity – will take the place of an actual efficacy test. All you have to do is inject someone with your experimental product, then bring them back in 2-4 weeks and take a blood sample, which invariably shows that antibodies are now present.

“Those antibodies may prevent disease, they may enhance disease (make it more severe), they may increase your risk of disease or they may do nothing you can measure. Doesn’t matter. Voila! That was easy. License issued.”

“The adoption of ‘AI clinical trials’ seems to be yet one more way that WEF corporate overlords seek to turn the global human population into planetary lab rats,” Holland said.

Nass said that, for pharmaceutical companies, it’s “cheaper still … to not use people at all. Just model them.”

“Why not, since the FDA now seems to think its job is mounting a convincing charade of a regulatory agency?” she questioned.

Other experts also tied efforts to develop “synthetic” clinical trial participants and data to broader WEF agenda items, including efforts to prepare for and counteract a hypothetical “Disease X” that the World Health Organization and others warn could cause the next pandemic.

“The push to vaccinate against Disease X within 100 days would most likely use this flimsy substitute for actual patient data in order to meet their implausible goal,” Hooker said.

“Disease X” is one of the agenda items at this week’s WEF meeting in Davos, while in 2021, the U.K. government and the Coalition for Epidemic Preparedness Innovations (CEPI) announced their “100 Days Mission” to build the capability to develop a vaccine for a future “Disease X” within 100 days.

“We must insist on clinical trials with human subjects, no matter how much the WEF and its ‘public-private partners’ try to dazzle and bamboozle us with technology,” Rectenwald said.

CHD.TV will cover the Davos meetings all week.

This article was originally published by The Defender – Children’s Health Defense’s News & Views Website under Creative Commons license CC BY-NC-ND 4.0. Please consider subscribing to The Defender or donating to Children’s Health Defense.
