By David Stephen
Sedona, AZ – There is a new [March 21, 2026] report on NBC News, Iran fires missiles at remote U.K.-U.S. base, claiming long-range capabilities it previously denied, stating, “Iran has fired missiles at the joint U.K.-U.S. Diego Garcia military base in the Indian Ocean, claiming the strike shows it is capable of longer-distance attacks than previously known.”
“The distance of the attempted strike could indicate that Iran has capabilities for long-distance attacks that it has previously denied, with the base the same distance from Iran as much of central Europe. It is unclear, however, if the missiles carried a payload or how far such an attack could truly reach, as neither missile reached its target.”
Iran is not as powerful as the United States militarily, but in this war, Iran is showing that it has the capacity to hurt. And it is still unclear to what extent, as the conflict ploughs on.
If Iran is able to hit specific targets in the Middle East, causing damage, losses and disruptions, it indicates that the protective coverage of a military superpower may not be as thorough as assumed, as sophisticated weapons become more ubiquitous.
There are several nations across a number of continents that the United States could hit without their being able to respond in any consequential way, which might compel them to surrender early.
However, that is not the case for Iran, given its military facilities and its coveted geography overseeing a chokepoint, the Strait of Hormuz.
If precision missiles are leveling the field for countries that should not stand a chance against a superpower, what does that mean in an era of artificial intelligence?
AI
There are lots of governments and corporations that can now build AI base models, with capabilities of all kinds, positive or otherwise. This means that all the risks that major AI companies warn about can be realized by some AI model of some country or company, even if they do not allow access [for public use].
So, it is possible to have private AI models do many of the things that are not allowed under guardrails by the leading AI chatbots.
It means those private AIs can suggest, recommend, refer, connect dots and do much else that is negative, without any regulation or outrage.
The datasets to build AI models are already public. The processors to train them are obtainable one way or another. The architectures and deep learning libraries are public knowledge, so there is nothing that says it cannot be done, however intensive, so long as the team has the resources.
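As a small illustration of how low that bar is, the sketch below uses the open-source Hugging Face transformers library to instantiate a fresh, untrained GPT-2-class model from its published architecture. The hyperparameter values here are illustrative, not any particular lab's recipe; data and compute are then the only remaining inputs.

```python
# Sketch: a published architecture is a few lines of public library code.
# Assumes the open-source Hugging Face `transformers` package is installed.
from transformers import GPT2Config, GPT2LMHeadModel

config = GPT2Config(                 # published architecture hyperparameters
    n_layer=12, n_head=12, n_embd=768)
model = GPT2LMHeadModel(config)      # randomly initialized, ready to pretrain

print(f"{sum(p.numel() for p in model.parameters()):,} parameters")
```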
Now, while there is often an outcry when some AI model is misused in public, followed by adjustments, there does not seem to be any need to make an AI model safe or aligned to certain values if it will not be used in public. So, the raison d’être for AI safety and alignment is principally business prudence.
Also, there is no safety architecture that means an AI base model cannot be trained unless it is safe or aligned. Most safety and alignment work is done post-training.
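To make the structural point concrete: a standard pretraining loop optimizes only next-token prediction, and any safety fine-tuning is a separate stage applied afterward, if at all. The sketch below is a minimal PyTorch-style outline, not any lab's actual pipeline; the model, data and `safety_finetune` step are hypothetical stand-ins.

```python
# Minimal sketch (hypothetical names): pretraining has no built-in safety step.
import torch.nn.functional as F

def pretrain(model, dataloader, optimizer, steps):
    """Standard next-token-prediction loop: the objective is purely
    predictive accuracy; nothing here enforces safety or alignment."""
    model.train()
    for step, (inputs, targets) in zip(range(steps), dataloader):
        logits = model(inputs)                      # (batch, seq, vocab)
        loss = F.cross_entropy(
            logits.view(-1, logits.size(-1)), targets.view(-1))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return model  # a fully capable base model, aligned to nothing

def safety_finetune(model, preference_data):
    """Separate, optional post-training stage (for example, supervised
    fine-tuning on curated refusals, or preference tuning). A private
    builder can simply skip this call."""
    ...  # omitted: applied only if the builder chooses to apply it
```

The point is that `safety_finetune` is a choice made after the capable artifact already exists, not a precondition for creating it.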
This makes it obvious that general AI safety and alignment may not exist.
Is AI Safety and Alignment Futile?
AI is already used in war and will continue to be a tool of combat. Humans have factions. Tensions will often flare, sometimes resulting in conflagrations like the Iran war.
Because there is no foundational training architecture for safety in AI, it cannot refuse to be used in war as its own decision [given its non-existent agency]. Also, there are ways to have misaligned and unsafe AI cause problems in some situations, if suddenly made accessible to the public.
The rush towards benchmarks and evaluations may not be as necessary as assumed, since even lower-tier AI can cause substantial damage when it is unsafe, unaligned and directed at certain outcomes.
Interest in AI safety has been largely driven commercially. Unless some new innovation can match training with safety, track AI models that connect to the internet for certain safety marks, or have advanced models seek out unsafe models using a fresh technique that checks against those on a leaderboard, AI safety and alignment may be futile.
Conceptual Brain Science
The possibility for general AI safety could stem from explorations of improbable architectures, taking cues from conceptual brain science.
It is possible to look at new approaches: penalty data, weakening from certain algorithms where there is no safety, and the need to have a base model time out if unconnected to the internet, where it may get vetted.
These could be established and explored as paths towards general AI safety and alignment, because what unsafe AIs may be used to do between enemy nations in the coming era could be devastating. A rough sketch of the timeout idea follows.
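The sketch below is a minimal, speculative rendering of the timeout proposal, assuming a hypothetical vetting endpoint; nothing like this exists in deployed systems today, and the names (`VettedModel`, `VETTING_URL`) are illustrative only.

```python
# Minimal sketch of the "time out unless vetted" idea (all names hypothetical).
import time
import urllib.request

VETTING_URL = "https://example.org/vet"   # placeholder vetting service
VET_INTERVAL = 24 * 60 * 60               # re-vet at least once a day

class VettedModel:
    """Wraps a model so inference halts if it has not recently passed
    an online safety check. A disconnected model 'times out'."""

    def __init__(self, model):
        self.model = model
        self.last_vetted = 0.0

    def _vet(self):
        try:
            with urllib.request.urlopen(VETTING_URL, timeout=10) as resp:
                if resp.status == 200:          # vetting service approves
                    self.last_vetted = time.time()
        except OSError:
            pass                                # offline: no fresh approval

    def generate(self, prompt):
        if time.time() - self.last_vetted > VET_INTERVAL:
            self._vet()
        if time.time() - self.last_vetted > VET_INTERVAL:
            raise RuntimeError("Model timed out: no recent safety vetting.")
        return self.model(prompt)
```

The obvious weakness, which the argument here anticipates, is that a wrapper like this is removable by whoever controls the weights; that is why the proposal also points at training-level mechanisms, penalty data and weakening, rather than inference-time checks alone.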
It is possible to underscore the approach with the postulation in Conceptual Biomarkers and Theoretical Biological Factors for Psychiatric and Intelligence Nosology.

