By David Stephen
France24, the global news channel of the Republic of France, has shown more of ChatGPT and OpenAI in recent weeks than it has of the country's own AI star company, Mistral.
Mistral is supposed to be France's answer to the dominance of US AI companies, yet it is hardly promoted, marketed, advertised or even referenced on the country's own station.
Though Mistral is reported to have revenue from Europe and some from the United States, several possible AI markets could be opened to it with enough of a campaign through France24, especially because the station is a staple for viewers in many French-speaking [and other] geographies who would welcome Mistral AI products but do not even know the company exists. That missed opportunity raises a major question about the commitment of France, collectively, to lead in AI for its homeland, or to bear the torch for the rest of Europe.
A recent feature in the FT, Has Europe’s great hope for AI missed its moment?, states that, “Mistral was founded on the idea that it had discovered more efficient ways to build and deploy AI systems than its bigger competitors. If Mistral fizzles, then Europe’s businesses and consumers will have little choice but to depend on a handful of American — or Chinese — platforms. For many European leaders and companies, having no sovereignty or influence over a technology that has the potential to affect every corner of work, culture and society is a nightmare scenario. Technical benchmarking sites, such as RankedAI.co, place Mistral among the world’s top 10 model developers. Mistral also has several prominent French companies as customers, such as bank BNP Paribas, the shipping company CMA-CGM, and the telecom operator Orange. But Mistral insists that it is global: a third of its revenue now comes from the US, where its customers include consumer giant Mars and tech companies IBM and Cisco. European customers include online retailer Zalando and enterprise software maker SAP.”
France is hosting the AI Action Summit from February 10–11 in Paris, having planned to do so since the AI Safety Summit in the United Kingdom in 2023. The summit has a number of agendas [Public interest AI, Future of work, Innovation and culture, Trust in AI, Global AI Governance], but France does not have an AI safety institute or a dedicated AI safety research lab working on frontier possibilities for AI safety and alignment.
There was, however, an announcement, LNE and Inria sign ambitious partnership agreement, stating that, “Inria [L’Institut national de recherche en informatique et en automatique] and LNE [Laboratoire National de Métrologie et d’Essais] signed a framework agreement defining their partnership roadmap, with the aim of setting up an “AI Evaluation” program through an AI evaluation center carrying out research, experimentation and control activities at the highest level worldwide.”
There was also a post in December 2024, Announcing CeSIA: The French Center for AI Safety, stating that “The Centre pour la Sécurité de l’IA or French Center for AI Safety is a new Paris-based organization dedicated to fostering a culture of AI safety at the French and European level. Our mission is to reduce AI risks through education and information about potential risks and their solutions.”
The seriousness of the risks of AI exceeds what side efforts can address, absent a major national dedication against present misuses and whatever may be ahead with AI. Even some of the nations with AI safety institutes, like the UK, were not given access to the weights of frontier AI models for evaluation, and they have not been able to provide any technical answers to the problems of fake images, videos, generated malware, AI voices, or fake texts. So, for France, there is a longer road.
While France may announce projects at the summit, it did not have to wait until the summit to make announcements on AI safety; speed in getting ahead of advances in AI development seems important to avoid being left behind.
France also does not have an AI workforce research lab, where intense modeling could explore options for what people would do as AI replaces them, task by task, across endeavors. There are announcements every week of new features from frontier AI companies, with AI agents.
The nation that wins the AI race will be the nation with the safest AI and the nation with what can be termed human-side AI. This means that as AI replaces people at jobs, a solution that provides for what people would do will be better embraced than one that leaves everything to the wild.
If AI can teach, then AI possesses knowledge and can displace people at work. If AI displaces, what are the human resource models to accommodate humans, on rotation or in tiers, with segmented remuneration? What is the possibility of using AI to learn harder skills, not just to pass exams, so that it is easier to develop major competence and enable new value in duties for the population?
France is doing everything to host a major summit, but the foundation for its homeland's success after the summit is missing. The coordination that should also come from France24 and others for local AI startups is not there. France does not have an edge in theoretical neuroscience toward informing possible new architectures to lead AI advancement, alignment, or safety. France also does not have a dedicated math AI lab. There is a lot that France could have done at home, with less effort than a summit that may quickly seem unnecessary, like the last few in AI.
France will host the event, but the work ahead for the republic and for Europe may not inspire confidence that it will surpass the two major nations, on other continents, that are in intense competition and far ahead of it.
A recent feature in TIME, Inside France’s Effort to Shape the Global AI Conversation, states that, “One of the actions expected to emerge from France’s Summit is a new yet-to-be-named foundation that will aim to ensure AI’s benefits are widely distributed, such as by developing public datasets for underrepresented languages, or scientific databases. Her second priority is creating an informal “Coalition for Sustainable AI.” AI is fueling a boom in data centers, which require energy, and often water for cooling. The coalition will seek to standardize measures for AI’s environmental impact, and incentivize the development of more efficient hardware and software through rankings and possibly research prizes.”