Deconfusing the heroin wireheading argument

Posted on 2024-05-15


  1. Goals

    1. Ship a deconfusion artifact that serves as an example of how to do deconfusion

    2. Inside the essay I’ll be using concepts that are relevant to my thinking about these things

    3. Help people who were in my position last year such that they are not as confused

      1. Implicitly, write something that would have accelerated my mental position from back then.

    4. Get some understanding of what a viable personal writing strategy looks like

    5. Also, start building up a portfolio of diverse research and practical artifacts, which can help with legibility

  2. Let’s first describe the thing we are trying to deconfuse here

    1. I’d say that this is somewhat related to the things Alex tries to point at in “Reward is not the optimization target”, but not entirely

    2. The heroin wireheading intuition is likely one of several disparate intuitions (some conflicting, some not) underlying the shard theory cluster of beliefs

      1. My intention is not to try to figure them out and then try to debunk them – this is pretty difficult.

      2. What I do want to do is to communicate one succinct confusion I had, that seems central to the intuitions behind shard theoretic approaches to alignment, and deconfuse it

        1. This is central because I have the impression that a lot of people (including me) have found this conflicting intuition convincing enough that they have moved towards investigating shard theory

        2. This seems to have been the case for me too

        3. I moved away from shard theory after trying to investigate my own concrete approaches that leverage shard theory to try to solve alignment, and kept getting stuck on the reflection problem

          1. See “Contra shard theory, in the context of the diamond maximizer problem”

  3. The heroin-wireheading argument

    1. The heroin-wireheading argument is essentially: “Hey, we humans seem to easily avoid taking heroin and thereby wireheading ourselves (because taking heroin is essentially us ‘maxing out’ our ‘reward function’). But AIXI-based alignment theory would predict that humans wirehead themselves immediately. What is going on?”

      1. This provides the impetus for people trying to investigate shard theory approaches to alignment

      2. Some people might find the conflicting intuition quite annoying and they might confabulate reasons for one thing being the case or another being the case, and then move on

      3. Others might take a more “explore-reorient-repeat” approach, where they might work on concrete projects that are implicit bets on one or another intuition being right, and as their feelings shift, they’d change the projects they are working on

      4. Yet others might simply be at peace with their intuitions, consider this a ‘mysterious phenomenon’ that they would not dissect, and use it as evidence that “hey, maybe alignment could be easy? that’s kinda why I am doing prosaic alignment”, even though the main reason they are doing prosaic alignment is incentives

      5. Note that all these things are understandable and fine. It is not easy to do deconfusion.
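    For reference, the “AIXI-based” prediction comes from a formalism in which the agent chooses actions purely to maximize expected future reward under a Solomonoff-style mixture over environments. A sketch of the action rule, written from memory of Hutter’s formulation (so the exact notation should be checked against the original):

    ```latex
    a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
      \left( r_k + \cdots + r_m \right)
      \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
    ```

    Nothing in this expression refers to the physical world: the objective is defined entirely over the reward symbols arriving on the input channel, which is why the formalism predicts that an embedded agent with physical access to that channel would seize it.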

  4. Deconfusing the heroin-wireheading argument

    1. mathematical objects are not physical objects

      1. This is about whether one takes a Platonist or an anti-Platonist approach to relating to mathematical objects

      2. The properties of a mathematical object apply to real objects only inasmuch as the real object approximates the mathematical object

        1. See “Without fundamental advances, misalignment and catastrophe are the default ou…” for an alignment-related example

        2. Describe the circle and circular table top example

        3. Maybe describe the heroin wireheading example as a subset of this point, and make this point the main point of the post?
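        The circle / table-top point can be made concrete with a small sketch (the regular polygon standing in for a physical table top is my own illustrative choice): the area formula πr² is an exact property of the mathematical circle, and it describes a physical object only as well as that object approximates a circle.

        ```python
        import math

        def circle_area(r):
            # exact property of the mathematical object
            return math.pi * r * r

        def ngon_area(n, r):
            # area of a regular n-gon inscribed in a circle of radius r,
            # standing in for a physical table top that only approximates a circle
            return 0.5 * n * r * r * math.sin(2 * math.pi / n)

        # the formula's error shrinks as the object better approximates the circle
        err_coarse = abs(circle_area(1.0) - ngon_area(8, 1.0))
        err_fine = abs(circle_area(1.0) - ngon_area(64, 1.0))
        ```

        The more "circle-like" the object (64 sides vs 8), the smaller the error of applying the mathematical circle's area formula to it.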

      3. Why is the heroin wireheading example not relevant here?

        1. Well, oh. Okay damn this seems to get us into details about evolution. I don’t think that this can be a post on mathematical objects with heroin wireheading as a sub-example

        2. Anyway. The ‘untread ground’ argument

          1. Evolution basically isn’t doing much for you in the environment you are in now, so all you have to rely on is your brain; it is easier to model your brain and its reward system as the system itself, with only a utility function / capabilities, than to think in terms of reward functions.

            1. One huge source of confusion is how evolution is described by MIRI people (even though their arguments probably make sense, the way they argue them is quite confusion-promoting)

            2. Evolution selects for strategies that are optimal within certain environments. The bundle of strategy and environment is selected for together, and are intertwined.

            3. The gene-focused view of evolution can and will distract one from the core of evolution: natural selection, which does not depend on any specific mechanism of inheritance

              1. I anticipate some MIRI people posting the sequences and stating that this was pointed out in the evolution posts, but I think the use of the term “inclusive genetic fitness” in Nate’s post was an egregious mistake that promoted confusion about this

              2. The drama downstream of the use of that term (see Matthew Barnett’s post, Quintin Pope’s post) is some evidence that it was a really unhelpful way of pointing at the thing Nate wanted to point at

            4. Anyway, Nate’s point still stands: evolution that proceeded through selection over genes seems to have been slow enough that it hasn’t anticipated the situations humans find themselves in right now

            5. This is fine in general: natural selection isn’t agentic, and isn’t trying to ‘maximize inclusive genetic fitness’ – tons of animal species die out as the environment they were fit for vanishes.

            6. The difference here is that humans have abstract reasoning and have sufficient intelligence to be able to make drastic changes to the environment they are in, such that the environment humans are in right now is incredibly different from the one they were selected to thrive in.

              1. genes are just a mechanism of inheritance, man. It’s just a transfer of information. The genes are not the protagonist of the story. Humans are not necessarily the protagonists either, but humans are instantiations of the selected-for bundle of strategies optimized for certain environments.

            7. If evolution did not anticipate the existence of heroin, and the possibility of humans ‘wireheading’ and thereby reducing their chances of survival and reproduction, then citing the fact that we avoid heroin as evidence that there’s a systematic way to build a system that avoids wireheading doesn’t make sense to me.

              1. What is happening here is that we are seeing the extent of our capabilities as humans to anticipate and adapt to our new environment with potentially extremely dangerous threats.

                1. To be fair, this is mainly due to memetic selection at the level of culture. I’d assume that if we shifted our global civilization’s moral norms towards heroin-style hedonism at the expense of everything else we care about, humanity would go extinct pretty soon.

              2. This doesn’t make any point about ‘alignment’ with respect to evolution – this makes a point about humans believing that they value things other than heroin, anticipating that injecting heroin will mean that they would change in ways they (in the moment) dislike, and therefore avoiding heroin

              3. If you provided heroin (or ideal wireheading stimuli, such as a heroin-fruit) to chimps, they’d go extinct fast

            8. Given this, you could imagine that hunter-gatherer-era humans caring a lot about having sex is an example of reward maximization.

              1. The ‘reward function’ equivalent here is more like the inverse of what is selected for, given that certain strategy-environment bundles are better at proliferating than others

              2. In fact, ‘reward function’ as a way of modeling what is going on probably encourages confusion instead of discouraging it

                1. Of course, Alex has tried to point at this in his post “Many arguments for AI x-risk are wrong” (see the first point of the “Other clusters of mistakes” section)

                  If I want to consider whether a policy will care about its reinforcement signal, possibly the worst goddamn thing I could call that signal is “reward”! “Will the AI try to maximize reward?” How is anyone going to think neutrally about that question, without making inappropriate inferences from “rewarding things are desirable”?

                2. I don’t think I agree with Alex’s broader claims though, partially because I think he conflates the confusing discourse perpetuated by, for example, Nate’s “sharp left turn” post with the belief that the people behind it are confused

                  1. This may or may not be the case. Either way, I think it is a mistake to throw out your intuitions when they conflict with ‘more defensible’ intuitions that you can easily argue for, that are easily legible, and that are backed by millions of dollars of funding in our current AGI research and industry ecosystem

                3. Another note based on reading that post

                  In arguably the foundational technical AI alignment text, Bostrom makes a deeply confused and false claim, and then perfectly anti-predicts what alignment techniques are promising.

                  1. I empathize a lot with Alex’s feelings here, but I think his claim is very incorrect

                  2. As far as I can tell, AIXI’s math is correct, and the point being made is that clearly based on the math, the AI is entirely focused on controlling the reward channel. There’s no ‘real world’ or ‘physical’ assumption being made here.

                  3. This matters because at any given point in time, a reward is a sparse and very tiny provider of information about the utility function we wish the AI to have

                  4. Here’s another relevant quote

                    In the most useful and modern RL approaches, “reward” is a tool used to control the strength of parameter updates to the network.[3] It is simply not true that “[RL approaches] typically involve creating a system that seeks to maximize a reward signal.” There is not a single case where we have used RL to train an artificial system which intentionally “seeks to maximize” reward.

                    1. I think the issue here is that Alex is conflating the intuitions and words used downstream of reinforcement learning theory (please thank Richard Sutton and whoever else was involved here for this; also see Tsvi’s ‘hermeneutic net for agency’ post on degenerate uses of philosophical terms that ignore the core notions and intuitions underlying them) with the more philosophical (Bostrom, Marcus Hutter) use of the term

                    2. Sidenote: it is hilarious that I’m still interacting with shard theory, even after all this time

                      1. Shard theory is my white whale.

                    3. In ML, it is correct that “reward” is a tool to control the strength of parameter updates to a network. And once you stop doing parameter updates, it doesn’t make sense to imagine wireheading.

                    4. But if you modeled these modern ML systems as math, you’d notice that AIXI’s degenerate behavior, where it seeks to control the reward channel, is something the math wouldn’t show for them
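                    A minimal REINFORCE-style sketch of the “reward controls update strength” point (the two-armed bandit and all names here are illustrative, not from any post): reward enters only as a multiplier on the gradient step during training, and the deployed policy contains no reward term at all.

                    ```python
                    import numpy as np

                    rng = np.random.default_rng(0)

                    def policy(theta):
                        # softmax over two logits
                        z = np.exp(theta - theta.max())
                        return z / z.sum()

                    def reinforce_step(theta, lr=0.1):
                        # sample an action, observe a scalar reward, update parameters;
                        # the reward's only role is to scale the size of the gradient step
                        p = policy(theta)
                        a = rng.choice(2, p=p)
                        reward = 1.0 if a == 1 else 0.0  # hypothetical bandit: arm 1 pays off
                        grad_logp = -p
                        grad_logp[a] += 1.0  # grad of log softmax at a = onehot(a) - p
                        return theta + lr * reward * grad_logp

                    theta = np.zeros(2)
                    for _ in range(500):
                        theta = reinforce_step(theta)

                    # the deployed artifact is just policy(theta); once updates stop,
                    # no reward term appears anywhere at inference time
                    p_final = policy(theta)
                    ```

                    The trained policy ends up strongly preferring the rewarded arm, but nothing in the resulting function “seeks” reward; the reward signal shaped the parameters and then dropped out of the picture.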

            9. It doesn’t make sense to claim that because we humans avoid heroin, we are somehow systematically avoiding reward maximization and wireheading

              1. It makes more sense to imagine humans are using their own capabilities to navigate an environment that is likely to destroy them

              2. It’s all capabilities, my friend

          2. Thinking about reward functions and then conflating them with human minds is likely to mess up your reasoning process

            1. But I already talked about this in the Alex Turner-focused headings

        3. At a higher level, I think that Alex’s post (“Many arguments for AI x-risk are wrong”) is a cry of despair as to just how difficult deconfusion is

          1. And I agree with that specific sentiment. I am, as of writing, tempted to focus on more empirical approaches to alignment research, because of an inchoate hope that the epistemological challenge is less intense due to the higher frequency of feedback. But I am not yet certain that this results in a higher quality of alignment research progress

          2. The fundamental problem is that deconfusion is incredibly difficult, and using other approaches to figure out a working solution makes sense to the extent that those approaches are as powerful and viable as methods for figuring out what the problem is and how to solve it

            1. This doesn’t take into account obvious considerations of empirical skill – writing a working implementation of RSA is a different challenge than creating the theory (and the theoretical scaffolding) for RSA

          3. here’s another quote from the Alex essay, which he quoted from a comment of his, that shows something illustrative

            Reading this post made me more optimistic about alignment and AI. My suspension of disbelief snapped; I realized how vague and bad a lot of these “classic” alignment arguments are, and how many of them are secretly vague analogies and intuitions about evolution.

            1. I think that – or hope that – the way Alex went about resolving this internal conflict (between his intuitions) was one that involved trying to figure out why he felt this way.

            2. It seems more likely that he learned to reject one intuition in favor of the other, which I wouldn’t recommend even if you rejected the ‘less correct’ intuition, because you are throwing away information that would have helped you when you dealt with another such instance of confusion between two conflicting intuitions

              1. Here you are relying on luck to have the right intuition and to be going in the right direction in terms of investigation

            3. It is the case that more empirical approaches to alignment research do provide a lot of data that might shift Alex’s intuitions (and in fact I think that to a certain extent he agrees with most of the things I believe, just at a level where they don’t conflict with his intuitions related to ML and shard theory – his intuitions are associated with extremely powerful systems that can take over the world, for example)

          4. Overall, yes. Deconfusion is hard. I’d like to write more about this in a separate post, but in summary, learning the skills related to doing deconfusion seems extremely difficult to do by yourself, and in general I’d recommend doing so while working with some experienced researcher, so that you get feedback from them and learn their mental skills

            1. It’s how I got started trying to learn deconfusion, for example

  5. I can arrange this into the following sections

    1. what the conflicting intuition is

    2. the evolution part

    3. the reward function / mathematical object physical object part

    4. deconfusion is difficult (and notes on Alex’s confusions and my thoughts on deconfusion)