2025: Year in Review

Originally published on Oki Doki’s Notion Site

I finished my yearly review last week. Here’s a slice about my experience with AI this year. I think some folks have found it helpful in considering the 2nd- and 3rd-order effects of realigning our creative works/worlds around single-entity corporate AI. What role might you hold in its unfolding? You can read the entire review below. Happy new year!

[Image: Aurora borealis over a house]

Benjamin Borowski’s 2025: Year in Review (okidoki.notion.site)

Throughout the year, I was deeply uncomfortable with what I see as a lack of critical sense-making around AI. I could not understand how people I trusted as creative and critical thinkers were transitioning from skepticism to zealotry so quickly, without considering the 2nd- and 3rd-order effects of handing the keys to knowledge to a handful of corporations.

Part of my decision to take on the contract at Notion was to put myself as close to the metal as possible, to understand the utility of LLMs in technology and creativity. I wanted to see what people/companies were actually doing, how they were talking about it, and whether I could reproduce the incredible artifacts of productivity that were being claimed (personal assistants, therapy replacement, one-shot apps, multi-modal parallel agents, AGI {n} months away, etc.).

I learned a ton about prompting and parallelized agentic development. And in many cases, yeah, I came to understand how useful LLMs are in programming, and how desirable it is for creators to be able to speak solutions into existence. I also noticed a lot of behavioural use patterns that others didn’t seem to register. And I still lament an ongoing lack of critical analysis.

Yeah, it’s good now. But for what/whose definition of “good”?

I wrote a lot of rants in Apple Notes about AI. Some of them were certifiably unhinged. I published some of them. Mostly, I annoyingly orated to Marie on our daily walks (something I need to get much better at avoiding). I enjoyed defiantly posting on LinkedIn. I don’t think it did much. I decided to stop using AI in public generative contexts (and I think my album art improved substantially because of it, now that it uses my own photography), and to use it mostly to understand how it is being sold and integrated, largely so I can protect myself from (and educate others about) the ongoing nonconsensual implementation and the harmful directions of our accelerated slide into transhumanism.

I refused to be “anti” in our trainings, instead preferring to show our students how AI tooling actually performed. Once you start using these tools in earnest, you don’t really need to say they’re not performing as sold; I find they highlight their qualities quite efficiently.

One thing I know for sure after an intense year: I completely reject the idea that “AI”—as productized by corporate America—is “inevitable”. I will continue to defiantly demand that AI models be a public good, and that AI companies be held accountable both for their impacts on sense-making (mental health, education, safety, and so on) and for remuneration of the immeasurable value stolen from a non-consenting populace.

TL;DR: we must resist the slide into digital feudalism.