blog

RSS
  1. We are past the point of no return for model collapse

    Dead Internet Theory has been around much longer than commercially available Large Language Models. It's a conspiracy theory based on the premise that the Internet itself is basically fake and gay—all content on all web sites is generated by bots in order to command and mind control us like sheep or cattle. Even though I love me some edgelord bullshit, I never thought this was very plausible; the Internet is bigger than centralized social media websites, and prior to the explosion of LLMs, mass hypnosis by AI-generated content never seemed to me like it would scale well. But my, how times have changed.

    Recently, practically everything on every website has the hallmark indicators of AI generation. Social media sites like Reddit have devolved into a battle royale between ragebait and karma-farming prompts. Non-social media sites, too, have become obvious LLM content dumps. These AI garbage sites are referencing, summarizing, and feeding back into each other, and this is actually a huge problem for proponents of generative AI.

    AI companies will sell you the lie that their language models will revolutionize everything—after all, they are trained on the collective works of humanity! Well, that's great up until 2021. Since then, the pool of possible training data (aka the Internet) has become so polluted with AI-generated garbage that it's effectively impossible to programmatically exclude all the AI slop from the training set. The problem is that when you train an AI model by feeding its own output back into it, the model degrades rapidly, a failure mode known as model collapse.
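    You can see the shape of this degradation with a toy simulation (my own sketch, obviously nothing like how a real LLM is trained): repeatedly fit a toy "model" (just a Gaussian) to samples drawn from the previous model's output, and watch the distribution's diversity wither away.

    ```python
    import random
    import statistics

    def next_generation(samples, n):
        # "Train" a toy model on the current data (fit a Gaussian),
        # then generate the next generation's training set from it.
        mu = statistics.fmean(samples)
        sigma = statistics.stdev(samples)
        return [random.gauss(mu, sigma) for _ in range(n)]

    random.seed(0)
    data = [random.gauss(0.0, 1.0) for _ in range(10)]  # original human-made data
    initial_spread = statistics.stdev(data)

    for generation in range(300):
        # each generation trains only on the previous generation's output
        data = next_generation(data, 10)

    final_spread = statistics.stdev(data)
    # final_spread shrinks toward zero: the model forgets the tails of the
    # original distribution and converges on regurgitating one point.
    ```

    The small sample size per generation exaggerates the effect, but the direction is the same regardless: variance only leaks out of the loop, it never comes back in.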

    Again, two-bit Silicon Valley hucksters will hand-wave around this problem. They'll buy off entire university research departments. They'll invest in GPU companies, who will reinvest in their same dumb AI companies, who then reinvest in GPU companies in an endless circlejerk. But the collapse of the model is a mathematical inevitability. You can hire an army of so-called data scientists and light billions of dollars of investor capital on fire, but you cannot stop an inevitability. Meanwhile, the Internet will keep breathing.

    Posted 2025-12-09 20:21:22 CST by henriquez. Comments
  2. Install Stable Diffusion in WSL with AMD Radeon ROCm

    Recently released Adrenalin 24.12.1 driver unlocks new AI-related potential!

    Recently when upgrading my AMD Adrenalin driver, a line in the release notes piqued my interest:

    Official support for Windows Subsystem for Linux (WSL 2) enables users with supported hardware to develop with AMD ROCm™ software on a Windows system, eliminating the need for dual boot set ups.

    Historically, AMD ROCm support has been pretty limited compared to NVIDIA CUDA, which has worked in Windows Subsystem for Linux (WSL 2) for a while. So this new driver seemed like kind of a big deal, and I thought I'd check it out!

    AMD's article is short and sweet. Obviously you'll need the latest AMD Adrenalin Edition GPU driver installed, and also Windows Subsystem for Linux. Microsoft's official documentation is good, and I've gone through my own installation experience here.
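    For reference, AMD's instructions boil down to installing their amdgpu-install package inside your WSL distro and selecting the WSL use case. The package URL below is a placeholder (release-specific), so grab the current one from AMD's article:

    ```shell
    # Inside your WSL distro. The amdgpu-install .deb is release-specific;
    # the "..." below is a placeholder, get the real URL from AMD's docs.
    wget https://repo.radeon.com/amdgpu-install/.../amdgpu-install_VERSION_all.deb
    sudo apt install ./amdgpu-install_VERSION_all.deb

    # Install the WSL-specific ROCm use case (no kernel driver inside WSL)
    sudo amdgpu-install -y --usecase=wsl,rocm --no-dkms
    ```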

    Once you have the amdgpu driver installed, you can run rocminfo to confirm everything is working. You should see output like this:

    *******
    Agent 2
    *******
      Name:                    gfx1100
      Marketing Name:          AMD Radeon RX 7900 XTX
      Vendor Name:             AMD
      Feature:                 KERNEL_DISPATCH
      Profile:                 BASE_PROFILE

    Installing the Stable Diffusion Web UI is also easy. You'll need Python 3.10 and Git installed (sudo apt install python3 git) if you don't have them already. Then just pick an installation folder and clone the stable-diffusion-webui repository to your local machine: git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
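    Collected in one place, the setup steps above look like this (assuming a Debian/Ubuntu-based WSL distro, per the apt command):

    ```shell
    # Prerequisites: Python 3.10 and Git (Ubuntu 22.04's python3 is 3.10)
    sudo apt update
    sudo apt install python3 git

    # Pick an installation folder, then clone the Web UI into it
    git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
    cd stable-diffusion-webui
    ```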

    Fixing AMD-specific problems in Stable Diffusion

    Once you have the Stable Diffusion code, you should be able to run ./webui.sh to start the Web UI. However, more than likely you'll run into a couple of specific errors that prevent it from starting:

    • Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check

      By default, PyTorch is trying to talk to the NVIDIA CUDA driver. Obviously on an AMD GPU, that's not going to work. Helpfully, this error message tells us how to fix the problem.

    • RuntimeError: "addmm_impl_cpu_" not implemented for 'Half'

      I'm not sure if this is a driver bug or what, but apparently half-precision mode isn't working under ROCm. You can fix this by adding --precision full --no-half to your COMMANDLINE_ARGS.

    To fix both problems, simply edit your webui-user.sh file, find and un-comment the line (remove the leading #) with export COMMANDLINE_ARGS, and customize it like so:

    export COMMANDLINE_ARGS="--skip-torch-cuda-test --precision full --no-half"

    Edit webui-user.sh with GNU nano

    Save the file, and now you should be able to run ./webui.sh to start the Web UI and begin generating images with your AMD Radeon GPU! Once the Web UI is running, you can open it in your browser by navigating to http://127.0.0.1:7860

    A shiba inu sitting at a table, no methamphetamines

    Posted 2024-12-21 12:02:00 CST by henriquez. 2 comments
  3. successfully reversed time

    johNny wAs oBsEsseD wiTH gOiNg fASt. HELEN Was OBseSSEd wIth GOiNG SlOw. I wENT INfinITy stepS FuRther AnD reveRsed ALL THE way bacK To aLL THe way BACk tO thE beGINninG. IRoNiCaLLY i dON't HaVe A loT Of TimE tO eXPLAiN, AND cERTainly DON't unDERSTaNd All THE afTer-EfFEcts. WErE NeW iTerAtIOnS spAwned, oR WerE TheY ALReady RunniNG? I DON't KNOw, BUt I THouGHt You shOuLd knOw.

    Posted 2024-12-19 00:00:00 CST by henriquez. 1 comment
  4. I quit tech.

    I pulled the RTX 4080 from the server and sold it for ETH. I've had this midlife crisis on a slow burn for a very long time now. Three years ago I said I would drop out and I finally did. My disappearing act is complete.

    Working for tech companies never made me feel good. The tech industry is not making the world a better place. The Internet is mostly destroyed and our top innovators are focused on putting more people out of jobs with their stupid AI language models. It felt good to burn bridges on my way out.

    Luckily I found an infinite money hack so I'm set for life as long as I keep playing the game. I live in your critical infrastructure now, maintaining, advancing, defending. It feels good to do something useful, that helps people. Nobody around me knows what a psycho I really am. I charm, blend in, fade to gray.

    Now I am finally living life on my own terms and it feels good.

    Posted 2024-05-12 14:49:00 CST by henriquez. 2 comments
  5. Stable Diffusion is trippy

    I just started playing around with Stable Diffusion, an "AI" image-generation model. It can be run locally via a very nice Web UI, and it generates images based on text prompts, provided you have enough GPU power. (Generally, bigger images take longer and consume more GPU memory.) I'm just scratching the surface and some of the results are trippy as hell. So far, most of the images are pretty hallucinatory, only vaguely relating to my prompts, and with all the spookiness inherent to a machine trying to replicate something approaching "art." More images after the click to not wreck my bandwidth.

    Read More

    Posted 2023-12-11 12:42:00 CST by henriquez. 1 comment