AI Investment Study: Errors Risk Investor Losses
Summary
The CNMV published a research study analysing the investment-prediction capabilities of four advanced AI language models (ChatGPT, Gemini, DeepSeek, and Perplexity). The study found that these tools exhibit recurring reasoning errors including computational failures, financial misinterpretations, and hallucinations, with the highest error rates in simple, unstructured queries. The research recommends integrating rigorous human validation and using verified regulatory data from official sources such as the CNMV to improve reliability.
About this source
GovPing monitors CNMV Spain Press Releases for new securities & markets regulatory changes. Every update since tracking began is archived, classified, and available as free RSS or email alerts — 3 changes logged to date.
What changed
The CNMV published a comparative empirical study examining AI language models for stock investing, finding that ChatGPT, Gemini, DeepSeek, and Perplexity all exhibit recurring reasoning errors that could cause investor losses. Errors include computational failures, financial misinterpretations, and reliance on outdated or fabricated information. The study advises that unsupervised AI use by retail investors carries significant operational risks and recommends establishing human validation frameworks and grounding AI systems in official regulatory data to reduce informational noise and improve accuracy.
Financial institutions and investment advisers that deploy AI tools should review their oversight procedures and ensure human validation is integrated into AI-assisted investment decision workflows. Retail investors using AI for investment research should be aware that these tools can generate inaccurate or fabricated information, particularly when queries lack structure or context.
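The human-validation recommendation above can be pictured as a simple routing gate. This is a hypothetical sketch, not part of the CNMV study; the class, field names, and confidence threshold are all illustrative assumptions.

```python
# Hypothetical sketch of a human-in-the-loop gate for AI-assisted
# investment suggestions. All names and thresholds are illustrative.
from dataclasses import dataclass


@dataclass
class AISuggestion:
    ticker: str
    action: str              # e.g. "buy", "sell", "hold"
    model_confidence: float  # model's self-reported confidence, 0..1
    sources: list            # references the model claims to rely on


def requires_human_review(s: AISuggestion) -> bool:
    """Route a suggestion to a human analyst unless it is well
    supported: low confidence or missing verifiable sources are
    the failure modes (errors, hallucinations) the study warns about."""
    if s.model_confidence < 0.9:
        return True
    if not s.sources:  # no citations: possible hallucination
        return True
    return False
```

In this sketch nothing reaches an investor decision without either strong, sourced model output or an analyst's sign-off, which is one minimal way to integrate the oversight the study calls for.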
Archived snapshot
Apr 22, 2026: GovPing captured this document from the original source. If the source has since changed or been removed, this is the text as it existed at that time.
13 April 2026

STUDY ADVISES THAT USING ARTIFICIAL INTELLIGENCE IN INVESTMENT DECISIONS WITHOUT HUMAN SUPERVISION MIGHT RESULT IN FAILURES, ERRORS AND HALLUCINATIONS

A new study by the CNMV analyses the reasoning capabilities of various models (ChatGPT, Gemini, DeepSeek and Perplexity) when it comes to making investment decisions. Errors detected can ultimately lead to losses for investors. Using models based on official sources with regulated, standardised information can significantly improve the quality of the results.

Today, the Spanish National Securities Market Commission (CNMV) published the research study "Large Language Models and Stock Investing: Is the Human Factor Required?", by Ricardo Crisóstomo and Diana Mykhalyuk, who belong to the Strategy and International Affairs Directorate-General of the CNMV. The study provides an empirical comparative analysis of the investment predictions generated by next-generation language models in the current financial environment.

AI risks without human supervision

The study emphasises that the use of Artificial Intelligence (AI) without human intervention carries significant operational risks, suggesting that its uncontrolled use by retail investors could result in economic losses. After analysing the results of four advanced models (ChatGPT, Gemini, DeepSeek and Perplexity), the authors conclude that these tools exhibit recurring reasoning errors, including computational failures, financial misinterpretations and reliance on outdated or fabricated information ("hallucinations"). The errors were most evident in simple queries, without structure or context, highlighting the importance of using clear analytical instructions and establishing supervision mechanisms.

The importance of the human factor and the governance framework

The research indicates that integrating AI into financial markets poses not only a technological challenge, but an organisational one too. For these models' generative capacity to translate into reliable results, it is essential to establish a collaborative framework in which the processing capability of AI is subject to rigorous verification procedures and systematic human validation so as to reduce associated risks when detected.
Use of verified information

The study also emphasises the importance of using reliable, verified sources of information, as opposed to unclear and generic web content, which may contain contradictory or biased information. Furthermore, it also highlights that investment models based on regulatory data from supervisors such as the CNMV, which provide standardised and rigorously verified information, are more reliable and report fewer errors. Grounding AI systems in these official sources helps reduce informational noise, enhances data comparability and enables more coherent, accurate and reliable financial reasoning, compared with general information available on the internet.
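The grounding idea the study recommends can be sketched as a filtering step that restricts a model's context to documents from official sources before it answers. This is a minimal illustration, not the study's method; the whitelist, document shape, and function names are assumptions.

```python
# Hypothetical sketch: ground an AI assistant in verified regulatory
# sources by filtering its context documents before building a prompt.
# The domain whitelist and document structure are illustrative.
VERIFIED_DOMAINS = {"cnmv.es"}


def filter_verified(documents):
    """Keep only documents whose origin is a whitelisted official
    source; discarding generic web content reduces the informational
    noise the study describes."""
    return [d for d in documents if d["domain"] in VERIFIED_DOMAINS]


def build_grounded_prompt(question, documents):
    """Assemble a prompt whose context contains only verified excerpts,
    instructing the model to answer from those excerpts alone."""
    context = "\n".join(d["text"] for d in filter_verified(documents))
    return (
        "Answer using ONLY the verified regulatory excerpts below.\n"
        f"--- CONTEXT ---\n{context}\n--- QUESTION ---\n{question}"
    )
```

The design choice here is to exclude unverified material entirely rather than mix it in, matching the study's finding that models grounded in standardised supervisory data report fewer errors.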
For further information: CNMV Communications Department Tel.: 91 5851530 - comunicacion@cnmv.es
About this page
Every important government, regulator, and court update from around the world. One place. Real-time. Free.
Source document text, dates, docket IDs, and authority are extracted directly from CNMV.
The summary, classification, recommended actions, deadlines, and penalty information are AI-generated from the original text and may contain errors. Always verify against the source document.