Code in the Chaos: How AI Mirrors Us
- Robert Magana
- Jun 11
- 4 min read

We talk a lot about AI and the algorithms behind social media, and often about their flaws. But in reality, those flaws are built into the dynamics you see every day. These systems are not functioning improperly; they are doing exactly what they were trained to do. The difference is that now we take notice, because the output is pushing something we passionately disagree with.
During events like COVID or after the George Floyd movement, we often heard the phrase "algorithmic bias." We hear it again now with what Trump has poorly managed and laughably called a "rebellion," which is really just people who think the administration has done a bad job handling a variety of problems. I'm not saying algorithmic bias can't or doesn't exist, or even that it's never deliberate, but in most cases it comes down to social dynamics that are often unavoidable.
First, there's data. Data is only as good as its entry, whether you like it or not. Without ferocious vetting of how that data is obtained, your dataset ends up drastically skewed. The better, more realistic, and more authentic the data, the better the model. Unfortunately, that can also mean large privacy violations in how consumer data is acquired.
Why does that matter?
Because AI is only as good as the data it is given. Imagine a light switch, but instead of being flipped manually, it has a system designed to read its environment from data entries (is it dark, based on what it can see?) and then turn itself on.
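If that sounds abstract, here's a minimal sketch of the idea in Python. The sensor reading and threshold are made up, but the point stands: the decision is only ever as good as the data feeding it.

```python
# A minimal sketch of the "smart switch" idea: the decision is only as good
# as the sensor data it receives. The reading and threshold are hypothetical.

def read_ambient_light() -> float:
    """Stand-in for a real sensor; returns perceived brightness from 0 to 1."""
    return 0.12  # pretend the room is fairly dark

def should_turn_on(brightness: float, threshold: float = 0.25) -> bool:
    # The switch "knows" nothing beyond this one number. If the sensor is
    # dirty, miscalibrated, or fed bad data, the decision is skewed with it.
    return brightness < threshold

if __name__ == "__main__":
    light_on = should_turn_on(read_ambient_light())
    print("lamp on" if light_on else "lamp off")
```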
The complexity with social media, which is broad-spanning, is that it's designed to entice you with things you may like. It bases this on whatever it can gather about you: browser usage, product interactions, even illegally collected data about the environment you're sitting in right now. It all depends on what it is told to work with, on the legality and consumer consent behind it, on the precision of how that data is gathered, and on the data itself.
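As a rough sketch (the signals and weights here are invented, not any platform's real pipeline), you can picture those gathered signals rolling up into a simple interest profile that then ranks what you see:

```python
# A toy sketch of how gathered signals might roll up into an interest
# profile. Signal names and weights are made up for illustration.
from collections import Counter

# Each event is (topic, weight): heavier interactions count for more.
SIGNALS = [
    ("sneakers", 1.0),   # browsed a product page
    ("sneakers", 3.0),   # added to cart
    ("politics", 0.5),   # paused on a post while scrolling
    ("puppies", 2.0),    # liked and shared a video
]

def build_profile(signals):
    profile = Counter()
    for topic, weight in signals:
        profile[topic] += weight
    return profile

def rank_content(candidates, profile):
    # Content is ranked purely by how well it matches what was gathered;
    # there is no notion of whether the match is accurate or fair.
    return sorted(candidates, key=lambda topic: profile.get(topic, 0.0), reverse=True)

print(rank_content(["puppies", "politics", "sneakers", "gardening"], build_profile(SIGNALS)))
# -> ['sneakers', 'puppies', 'politics', 'gardening']
```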
So hold onto that thought, and think of the movement of information, and the communication between systems designed to present content to you algorithmically (which is almost all of them these days), as a biosphere: an intercommunicating collection of environments, each doing its best at that very thing.
One environment can be bad and another good, but the communication tends to happen anyway. Some companies do vet for all of this and do their best to clean their data, but then there's the issue of sheer quantity, which advances like quantum computing help with. Ultimately, though, these models are still far from human. Why? Because we're still not entirely sure how the mind conjures thoughts and then works through them.
Models of the brain's neural networks and how they are stimulated are advancing, and as those grow, so will AI's capacity for reasoning, which is getting there and eventually will arrive.
That doesn't do much good right now, and of course there's the next issue: in some instances, yes, algorithmic AI can be deliberately misused to generate bias. Think of a regime-like country whose state actors and systems have that level of mass outreach, or a company that doesn't practice DEI and whose recruiting models sift candidates without human personnel, or without ethical human personnel, to monitor them.
Bias can also come from feeding the AI too much information that is useless to the task at hand, or too little. The brain has a process for determining what is and isn't useful; AI only has the data it is given and the decisioning models built on top of it.
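To make that concrete, here's a deliberately naive sketch: a scorer that weights whatever fields it is handed, useful or not. The fields are hypothetical; the takeaway is that the model can't tell the difference.

```python
# A deliberately naive illustration: a scorer that sums every numeric field
# it sees, relevant or not. Field names are hypothetical; the flaw is the
# point, since the model has no sense of which inputs matter.

def naive_candidate_score(record: dict) -> float:
    return sum(v for v in record.values() if isinstance(v, (int, float)))

relevant = {"years_experience": 6, "skills_matched": 4}
with_noise = {"years_experience": 6, "skills_matched": 4, "profile_photo_brightness": 9}

print(naive_candidate_score(relevant))    # 10
print(naive_candidate_score(with_noise))  # 19 -- the useless field swings the score
```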
So let's circle back to social media, to negative algorithmic moments, and to the social dynamics that occur during mass events. These can be a good thing in some cases (say, helping sick puppies), but they can also be bad. And yes, in some cases, often with bots, they can be manipulated to distort a user's perception of something.
Social media systems reach mass audiences and show you things based on your interests, or on whatever you're currently viewing, to keep you in the app. The problem is that without complex decisioning models, and without knowing how (and actually applying how) to vet for the kind of critical-thinking patterns the brain uses to fact-check information at scale, the system will simply regurgitate whatever it has on the subject or topic you're invested in.
That can be driven by mass groups and what they are viewing, and so on. You can see how, when one thing is presented en masse, whether forcibly or not, it affects the single user viewing it.
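Here's a tiny, made-up simulation of that loop: items are ranked by engagement, users tend to engage with whatever is shown, and whatever starts slightly ahead runs away with the feed.

```python
# A small sketch of the feedback loop described above. All numbers are
# invented; the dynamic is what matters.
import random

random.seed(0)
engagement = {"claim_a": 105, "claim_b": 100, "claim_c": 100}  # near-identical start

def pick_top(scores):
    return max(scores, key=scores.get)

for step in range(1000):
    shown = pick_top(engagement)          # the feed favors the current leader
    if random.random() < 0.6:             # users often engage with what they see
        engagement[shown] += 1            # ...which pushes it further ahead

print(engagement)
# claim_a absorbs nearly all the new engagement, not because it is truer,
# but because it was shown more.
```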
So it spreads almost like a virus, much as any idea does, which can be good or bad. This is why work in security and defense, in privacy, and in improving data models and AI processing is so important. Essentially, the better all of that gets done, the better off you are. But right now, you have to go about your daily life while others purposely skew information for their own gain.
We now have an administration that pretty much does the same, which, from a national security standpoint, is a huge red flag. Like it or not, though, we've all seemingly realized this, and the growth of companies like Nvidia, CoreWeave, OpenAI, Microsoft, Alibaba, and Diginex seems inevitable given what they operate in and how. We'll likely see more companies like them come to fruition through innovation, assuming we're not all dead.