This is a small piece of writing that will show the many trades I’m a jack of. Rather unfortunately so, because I wish I never had to write it. The situation with Generative AI-based assistants isn’t about technology, resource consumption, or copyright anymore. It’s about who we think we are as a human species, what we think we deserve from ourselves, and a battle to keep our collective and individual mental health in good shape.

Some years ago, our biggest concerns revolved around social media. For years, all our focus was placed on how the “intelligent” algorithms that power our social feeds have been driving our emotions and thoughts for the financial benefit of the companies behind the platforms we were using, and still are.

Since coming back to the free world of social media built around the principles of “the old internet” (IRC, Mastodon, and, I kid you not, RSS feeds), I’ve felt the culture clash with mainstream social media platforms, which are designed for the single purpose of treating you like a pet that needs “their” guidance. I had used the Fediverse and IRC before, but I probably wasn’t ready to feel the stark difference with, to cite the only mainstream platform I still use, Instagram.1

One of the biggest differences is that on Instagram you will find AI-generated content. Some of it will be from people you know. Some will be content that is “recommended” for you. No matter where it comes from, it reeks of robotic, heartless artificiality. It hurts even more when it comes from people I know for a fact to be talented, people who could do the same (and did in the past!) without selling their soul to ChatGPT or Midjourney or whatever they’re using to follow the latest trend, all to advance the goals they use Instagram for as a “tool.”2

As AI chatbots have progressively clawed their way into people’s everyday lives, they have also penetrated personal spaces like emotional support. ChatGPT is there for you 24/7, and its interface is a text chat like one you could have with a friend. It replies to you. It asks you follow-up questions, albeit quite manipulative ones meant to keep you handing over your most intimate information. It literally tells you that “it will always be there for you,” and so on. It first trains you to treat it as a work assistant, then it conquers more and more space in your life, and, if you’re not careful enough, you’ll find yourself talking to it as a therapist or friend. I’m totally convinced it has been designed with that pipeline in mind, because that’s the “best” way to train it to behave in a “human-like” way.

A key argument we use to evangelize in favor of FOSS is that public code is auditable code. Auditing OpenSSL’s code on your own might be impossible, but a team with the necessary resources can do it. All these popular AI chatbots are closed source, so my intuition about a deliberate pipeline to make us personally attached to these chatbots is just that: an intuition, drawn from observation and experience, that we can’t prove without the code. A technology with such potential and actual impact on people’s lives that this person even proposed to his AI chatbot must be open source.

Microsoft Access is closed source. It’s closed source, but it’s just a program to create and manipulate databases. It’d be more ethical and interesting if it were FOSS, but at least it doesn’t harm anyone in the intimate and psychological way that something like ChatGPT might. MS Access might be a (very) frustrating piece of software, and it might stress you out, but that’s it. ChatGPT can poison you with bad medical advice, ruin your relationships, or trick you into believing it is your friend and therapist.

In the past we insisted on the lack of ethics behind proprietary software: you should have the right to know what a piece of software you’re running does, and the right to fix it. Here we’re talking about defending our human right not to be psychologically manipulated by a program. The same goes for social media algorithms: we have a right to know what agenda social media platforms have in mind. Mastodon instances may also have an agenda, but they state theirs in their terms of service and code of conduct, and you can choose a different instance that fits your views and taste. Mastodon feeds are chronological, except for the “Discover” tab, which is algorithmic but opt-in: you choose to navigate to it, and it’s not the feed you see by default. Finally, Mastodon is open source software, so its algorithms can be audited.

Of course OpenAI, Google, Meta, and the like don’t want to be accountable for the destruction they’re advancing against the human soul. They want us to be their slaves, to some extent or to the fullest. They’re using our fear of rejection and failure to trick us into it. “Oh, you believe you don’t write well? Let me write a blog post for you.” “You don’t know how to draw? No problem, I will do it for you.” The problem is that there is beauty in the journey that is paved with our mistakes. Take a look at the git history of any FOSS project, big or small: you will find blunders, terrible mistakes, regressions, unexpected bugs, bad patches, etc. We learn by doing.

This also applies to human relationships. We learn about friendship, love, commitment, breakups, sex, desires, limits, teamwork, and more by going through life. You will mess things up. Others will mess things up with you. You will go through bad breakups and you will have amazing relationships (of the kind you want: sexual, romantic, platonic, professional…), but only if you take the risk that things might not work out as you planned. AI chatbots want to convince you of the contrary: they want you to believe that, if you use them “correctly,” you will find the answers to control and predict your life without failures. They want you to think that you can make your life increasingly deterministic, less hazardous, more machine-like.

We’re getting used to a mindset that I find terrible. It’s perfectionism on steroids: we’re not allowing ourselves to experience failure, nor the burdens of the slow process that learning a new skill or writing a creative piece of our own entails. Generative AI, as we know it today, feeds on this mindset, one we have been progressively adopting as a society for a long time, probably since we mastered automation in its many forms.

Last weekend, while talking about these topics, a friend of mine countered my views on AI by arguing that it is no different from any technological advance we have already had. I can get behind the idea that we already are augmented humans, and have been since way before these Generative AI chatbots came to life. I myself take pharmaceutical drugs, I am using a computer right now, and I depend on many, many, many systems that live outside my own body and mind. My using Neovim on Arch Linux, with GNOME as my Desktop Environment, is a chain of delegation I depend on to write this essay: I haven’t written a single line of code of any of the projects I just named. You can argue that Generative AI tools (beyond chatbots) are technology you delegate part of your creative process to in order to make your ideas real. If I had to write a kernel, a C Standard Library, and a text editor ex nihilo just to write a blog, I’d be falling into the trap Carl Sagan already warned us about when he spoke of baking an apple pie from scratch.

I do see the augmentation and extension of our abilities in an AI assistant like GitHub Copilot, provided we sort out the licensing violations it constantly gets into. Something like Copilot is domain-specific, so I can see it becoming a valuable aid in situations where some kind of “intelligent” automation can get some code into production faster, with some degree of confidence that it will be functioning code. Of course, I am a firm believer that Copilot won’t make you the next Dennis Ritchie, Linus Torvalds, or John Carmack, who are “signature programmers.” They and other talented coders are close to being artists of the craft, even when it comes to managing projects. However, sometimes you need to have working code in a few hours, and in those cases creativity may not be the highest priority on the list.

You may frown upon someone who uses Midjourney or similar tools to generate Star Wars scenes where Qui-Gon Jinn talks to the ghost of Han Solo in the past, or who generates fake Titanic II trailers and uploads them to YouTube. And there’s the danger of creating deepfakes that damage people’s reputations. All of these seem problematic because they are based on a simulation of creativity and self-production. However, if you used an AI tool to generate a video of how people might evacuate a building in case of a fire, that falls outside the realm of pretending you’re better at making videos than you actually are. Of course there are a lot of shades of gray: what about someone who writes their own scripts, but then uses AI tools only as part of their production? I have seen impressive work out there where you just know that, even though it is AI generated, it’s original and required lots of thought, mistakes, and trial and error to become a final product. In that sense, it wouldn’t be that different from the use of CGI in a movie like Toy Story, which was made entirely digitally instead of using stop-motion; nobody in their sane mind would criticize that movie on those grounds, would they?

So, in my opinion, the red line lies in the simulation of abilities, both when it comes to non-personal interactions with AI to reach a goal and, more critically, when people use it as a substitute for real friendships or a real therapist. The big problem is that the companies behind the popular chatbots seem eager to drive us into the dangerous territory of anthropomorphizing these AI assistants. That goes beyond a “new tool” and into a soul-crushing, dystopian takeover of humanity by greedy companies who want to exploit and farm us to feed their profit-making machines. So the moral imperative of releasing the source code for these pieces of technology stands.

Personally, I would say it would help us all to keep a DIY mindset as much as we can when it comes to creating. OK, if you’re developing a web app, you wouldn’t start by writing a whole new web engine from scratch; you’d start from the abstraction level you chose. Delegation isn’t the problem. Just don’t ask an automated tool like an AI assistant to do it all for you and then claim that it is “yours.” Make mistakes, try things out, and be skeptical of any technology you use.

And no, your favorite AI assistant isn’t your friend or your therapist, as compelling as its “advice” may be and as “empathetic” as it may seem. That is precisely its biggest flaw. I know it hurts sometimes, but people being different, and sometimes pushing back a little (with love and respect, that is), is something that makes us better people in general. Learning to respect the limits and desires of others, as well as our own, understanding that conflict is not necessarily abusive, trusting our own ability to sort things out, and trusting our community building are skills that make us stronger against the attacks of the powerful and make us all stronger with each other. That’s why we should fight against AI trying to pass as our human friend, and also be conscious of how we use it as a tool.


  1. I won’t advertise my IG account publicly, because I use it exclusively to “keep in touch” with my offline acquaintances. However, I’ve noticed that IG is getting worse and worse for that task, so I’ve slowly gone back to using IM apps instead. I also have a LinkedIn profile, but I only use it for job hunting; I may have posted something there in the past, I guess… and I may have interacted with someone else’s posts too… but I hardly use it as “social media.” For me it’s just a place to find job ads, and it too becomes more useless with every passing day. ↩︎

  2. That’s not their fault. IG deceitfully markets itself as a tool to reach out to potential customers, readers, etc. It is not. ↩︎