My purchase of an Arduino kit at the end of high school. Over the years, it has passively introduced me to a lot of basic electronics without my explicitly studying them. So now I sometimes think "I want to measure my heart rate" or "I want to build a DIY custom keyboard" or "I want a physical pomodoro timer with just one button and 3 LEDs," and I can just order some parts, build the thing, and have a new tool that solves a simple problem.
I sometimes recommend that other people build something simple with electronics, only to realize that they don't even have any kind of microcontroller. For me, it has become nearly as primitive an action as 'make a simple bash/python script for this.' Being able to produce electronics has opened up a multitude of solutions, which I didn't quite realize until I noticed other people getting stuck without this capacity.
The mere presence of electronics in my life encouraged acquiring many other small pieces of knowledge, such as what a diode is and why it is useful, or what a transistor actually is (which I had theoretically learned in college, but when I needed to make an electrically controlled switch, 'transistor' did not come to mind as a thing I could buy at the store). It also led to skills like soldering and desoldering, and learning to use a 3D printer, which was itself a massive boon that deserves its own answer.
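To illustrate how small these builds really are, the one-button/3-LED pomodoro timer can be sketched as a tiny state machine before touching any hardware. This is a hypothetical Python model, not firmware: the phase durations and the "every 4th work phase earns a long break" rule are my assumptions, and the LED booleans stand in for what would be `digitalWrite` calls on a microcontroller.

```python
# Minimal pomodoro state machine: one button skips to the next phase,
# three "LEDs" (booleans here) show the current phase. Timings illustrative.

WORK, SHORT_BREAK, LONG_BREAK = 0, 1, 2
DURATIONS = {WORK: 25 * 60, SHORT_BREAK: 5 * 60, LONG_BREAK: 15 * 60}

class Pomodoro:
    def __init__(self):
        self.phase = WORK
        self.remaining = DURATIONS[WORK]
        self.completed_work = 0

    def tick(self, seconds=1):
        """Advance the clock; switch phases when the current one ends."""
        self.remaining -= seconds
        if self.remaining <= 0:
            if self.phase == WORK:
                self.completed_work += 1
                # Assumed rule: every 4th completed work phase -> long break.
                self.phase = (LONG_BREAK if self.completed_work % 4 == 0
                              else SHORT_BREAK)
            else:
                self.phase = WORK
            self.remaining = DURATIONS[self.phase]

    def button(self):
        """The single button: skip immediately to the next phase."""
        self.remaining = 0
        self.tick(0)

    def leds(self):
        """One LED per phase; on real hardware, three digital output pins."""
        return [self.phase == p for p in (WORK, SHORT_BREAK, LONG_BREAK)]
```

On an actual board this loop would run off a millisecond timer with the button on an interrupt, but the whole control logic fits in a few dozen lines either way.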
Two years later, there are now whole-brain recordings of C. elegans via calcium imaging. This includes models that are apparently at least partially predictive of behavior, and analysis of individual neurons' contributions to behavior.
If you want the "brain-wide recordings and accompanying behavioral data" you can apparently download them here!
It is very exciting to finally have measurements for this. I still need to do more than skim the paper though. While reading it, here are the questions on my mind:
* What are the simplest individual neuron models that properly replicate each measured neuron activation? (There are different cell types, so take that into account too.)
* If you run those individually measurement-validated neuron models forward in time, do they collectively produce the large-scale behavior seen?
* If not, why not? What's necessary?
* Are these calcium imaging measurements sufficient to construct the above? (Assume individualized per-worm connectomes are gathered beforehand instead of using averages across the population.)
* If not, what else is necessary?
* And if it is sufficient, how do you construct the model parameters from the measurements?
* Can we now measure and falsify our models of individual neuron learning?
* If we need something else, what is that something?
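The first two questions can be made concrete with a toy sketch: fit the simplest candidate model, a leaky integrator, to each neuron's trace, then run the fitted units forward as a coupled network. Everything here is invented for illustration: the "measured" trace is synthetic, the time constants and coupling weights are made up, and real calcium data would need deconvolution and actual connectome-derived weights.

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.1  # seconds per imaging frame (illustrative)

def simulate_leaky(tau, drive, x0=0.0):
    """Leaky integrator dx/dt = (-x + drive) / tau, Euler-stepped."""
    x = np.empty(len(drive))
    x[0] = x0
    for t in range(1, len(drive)):
        x[t] = x[t - 1] + dt * (-x[t - 1] + drive[t - 1]) / tau
    return x

# Stand-in for a calcium recording of one neuron: a leaky integrator
# with tau = 2.0 driven by a square-ish input, plus measurement noise.
drive = (np.sin(np.arange(300) * dt) > 0).astype(float)
measured = simulate_leaky(2.0, drive) + 0.02 * rng.standard_normal(300)

# "Simplest model that replicates the measurement": fit the one free
# parameter tau by grid search over mean squared error.
taus = np.linspace(0.5, 5.0, 46)
errors = [np.mean((simulate_leaky(t, drive) - measured) ** 2) for t in taus]
tau_hat = taus[np.argmin(errors)]

# "Run the validated units forward collectively": each unit's drive is a
# weighted sum of the others (weights would come from a connectome; these
# are random placeholders).
W = rng.normal(0, 0.3, size=(3, 3))
x = np.zeros(3)
trace = []
for t in range(300):
    x = x + dt * (-x + W @ x + drive[t]) / tau_hat
    trace.append(x.copy())
```

The interesting question is exactly where this sketch breaks on real data: whether per-neuron fits like `tau_hat` stay valid once the units are coupled, and what (extrasynaptic signaling, neuromodulation) has to be added when they don't.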
Edit: apparently Gwern is slightly ahead of me and pointed at Andrew Leifer, whose group produced a functional atlas of C. elegans a full year ago that also included calcium imaging, which I'd totally missed. One missing element is extrasynaptic signaling, which apparently has a large impact on C. elegans behavior, so to predict neuron behavior you need to attend to that as well.
I expanded 'shocked at failure' into:
The plans you make work.
When they fail, it is because of one of the following reasons:
When they fail for reasons other than these, you are extremely surprised and can point to exactly what about your worldview and anticipations misled you.
I first tried to describe rationality piece by piece, but realized that just comes out as something like: "Enumerate all the principles, fundamentals, and ideas you can think of and find about effective thinking and action. Master all of them. More thoroughly and systematically apply them to every aspect of your life. Use the strongest to solve its most relevant problem. Find their limits. Be unsatisfied. Create new principles, fundamentals, and ideas to master. Become strong and healthy in all ways. "
Non-meta attempt:
<Epistemic status: I predict most of these are wrong. In fact, I rather recently proved I didn't understand fundamental parts of The Sequences, so I know that my beliefs here are weak and my basis for them is broken. I am certain my foundation for these beliefs is wrong even if all of the beliefs themselves turn out to be basically accurate. I cannot thoroughly justify why they are right.>
General strategy: collect all the important things you think are true, and consider what it means for each to be false.
Starting with a list of the things most important to you, state the most uncontroversial and obvious facts about how those work and why that is the case. Now assume the basic facts about the things most important to you are wrong. The impossible is easy. The probable is actually not true. Your assumptions do not lead to their conclusions. The assumptions are also false. You don't want the conclusions to be true anyway. The things that you know work, work based on principles other than what you thought. Most of your information about those phenomena is maliciously and systematically corrupted, and all of it is based on wrong thinking. Your very conceptions of the ideas are set up to distort your thinking on this subject.
What if my accepted ideas of civilizational progress are wrong? What if instead of exponential growth, you can basically just skip to the end? Moore's Law is actually just complacency. You can, at any point, write down the most powerful and correct version of any part of civilization. You can also write down what needs to happen to get there. You can do this without actually performing any research and development in between, or even making prototypes. You don't need an AGI to do this for you. Your brain and its contents right now are sufficient. You just need to organize them differently. In fact, you already know how to do this. You're tripping over this ability repeatedly, overlooking the capability to solve everything you care about because you regard it as trash, some useless idea, or even a bad plan. You've buried it alongside the garbage of your mind. You're not actually looking at what is in your head and how it can be used. Even if it feels like you are. Even if you're already investing all your resources in 'trying.' It is possible, easy even. You're just doing it wrong in an obvious way you refuse to recognize. Probably because you don't actually want what you feel, think, and say you do. You already know why you're lying to yourself about this.
You can't build AGI without understanding what it'll do first, so AI safety as a separate field is actually not even necessary or especially valuable. You can't even get started with the tech that really matters until you've laid out what is going to happen in advance. That tech can also only be used for good ends. Also, AGI is impossible to build in the first place. Rationality is bunk and contains more traps than valuable thinking techniques. MIRI is totally wrong about AI safety and is functionally incapable of coming anywhere close to what is necessary to align superintelligences. Even over a hundred years it will be mechanically unable to self-correct. CFAR is just very good at making you feel like rationality is being taught. They don't understand even the basics of rationality in the first place. Instead they're just very good at convincing good people to give them money, and everyone including themselves that this is okay. Also, it is okay. Because morality is actually about making you feel like good things are happening, not actually making good things happen. We actually care about the symbol, not the substance.
That rationality cannot, even in its highest principles of telling you how to overcome itself, actually lead you to something better. To that higher unnamed thing which is obviously better once you're there. There is, in fact, actually no rationality technique for making it easier to invent the next rationality. Or for uncovering the principles it is missing. Even the fact of knowing there are missing principles you must look for when your tools shatter is orthogonal to resolving the problem. It does not help you. Analogously there is no science experiment for inventing rationality. You cannot build an ad-hoc house whose shape is engineering. If it somehow happens, it will be because of something other than the principles you thought you were using. You can keep running science experiments about rationality-like-things and eventually get an interesting output, but the reason it will work is because of something like brute force random search.
That the singularity won't happen. Exponential growth already ended. But we also won't destroy ourselves for failing to stop X-risk. In fact, X-risk is a laughable idea. Humans will survive no matter what happens. It is impossible to actually extinguish the species. S-risk is also crazy; it is okay for uncountable humans to suffer forever, because immense suffering is orthogonal to good/bad. What we actually value has nothing to do with humans and conscious experience at all, actually.
Hopefully, you came up with at least 100 bugs; I came up with 142.
I wrote 20,000 words from these prompts: not just the bugs themselves, but also my reactions to them. I ended up doing not much else for about three days, but I went over basically my entire life top to bottom, and now have a thorough overview of my errors. I stopped not because I ran out of things to fix, but because I realized the list would never end; I was still finding MAJOR areas to improve even after all that. I now see why the exercise is supposed to take only half an hour: there are about 200 million insects per person!
Lesson learned: sample, not catalog.
I've only taken a really basic economics course, but I found the explanations really straightforward and learned a lot. So I don't think the topic is as hard to parse as you'd think.
(Alternatively, I may have misunderstood details, overlooked problems, and simply don't have anything to contrast these statements to. This would make it harder to judge.)
The bank's persona did, however, fall flat repeatedly; it could have been a lot better with more realistic responses.
High-upvote, low-reply is less bad, but it still feels fundamentally broken in some way. Failing to leave a mark, maybe? I think I would mostly be confused by such a reaction. There might be specific types of posts that would generate it, but I feel those qualities do not generalize to the set of "authoritative, well researched, and obviously correct" posts.
Moreover, why should there be discussion? If a post is authoritative, well researched, and obviously correct, then the only thing to do is upvote it and move on. A lengthy discussion thread is a sign that the post is either unclear, incorrect, or has mindkilled its readers.
Alternatively, a lengthy discussion could be a sign that the post inspired connections to related topics and events, or that it made a critical advance that furthered other people's understanding of the topic. That optimizing for engagement yields divergence from what we want doesn't mean we should optimize against engagement, or that a lack of engagement is somehow good.
If there are a bunch of long-form articles for which the only reasonable response is, “Yep, that’s all true. Good article!” that’s a win condition.
I do believe there is a place for that, but if I repeatedly made posts that were so thoroughly correct and got essentially no engagement, I would take that as a sign that people weren't really interested, and that I should focus on other topics with a larger impact.
I took bupropion for years, and while it did help with executive function, I was also half-insane that entire time (literal years, from about 2015 to 2021). I guess it was hypomania? To expand on 'half-insane': one aspect is that I was far too willing to accept ideas on a dime, and accepted the background assumptions conspiracy theories made while only questioning their explicit claims. Misinformation feels like information! Overall there was a lack of grounding to base conclusions on in the first place. I will note this still describes me somewhat, but not nearly as badly. It is a bit hard to pin down how much of that was a lack of tools and knowledge, but a lot of it was an inability to calm down and rest. A brain constantly on the edge of exhaustion and constantly trying to push is in no state to think coherently.
Bupropion also made my anxiety significantly worse; I attribute most of the panic attacks in my life to it. But all this was very hard to notice amid college stress, and after taking it long enough I had just attributed it to my base personality plus existential despair from learning about AI risk.
My overall positive experience with it was that it felt like a stronger form of caffeine.
What ultimately helped my depression (not cured, but way improved) was:
* transitioning to female (estrogen in particular has strong positive effects for me within hours, but only when taken via the buccal or sublingual route instead of orally)
* stopping bupropion, which was frankly not good for my brain for multiple reasons (some listed here)
* Adderall to treat my ADHD (unknown to me until ~2022)
* graduating college and then not having constant stress from college or work deadlines
* learning to genuinely rest and enjoy doing nothing (stopping bupropion helped a lot with this)
* not constantly trying to come up with ideas and write expansions of them (this behavior mostly stopped when the bupropion stopped, actually)
* eating better (beef in particular is extremely important for some unknown reason)
* doing physical therapy to fix upper and lower crossed syndrome (which took a long time to identify): sleep is better, with less constant muscle tension while lying down
* working less than 20 hours a week. (More than that isn't sustainable for me)
* letting my activity be driven primarily by projects shaped like dopamine trails that spawn further dopamine trails, instead of todo lists and dependency trees: defining 95% of what needs to happen in the moment, as a reaction to the shiny thing in front of me (just one more interesting idea to implement this one tiny thing), as opposed to the next awful task being handed down from various bigger todo lists
* 4 days totally off for every ~8 of work (2 days off is never restful and I have multi-day momentum where I don't want to stop working on projects)
* an immense sense of calm safety while cuddling my girlfriend (it decreases my anxiety to an absurd degree)
Also: dropping the autistic masking. I didn't think I did any of this, since I'd known I was autistic since grade school and thought I'd actively fought anything shaped like 'being normal.' The kind of masking I did was people-pleasing, and I hadn't even realized how hard I was doing it. It was completely and utterly out of control. I would simulate conversation trees to notice which things I might say would induce stress in people, and then explicitly avoid saying those things later. I was unable to intentionally choose to induce stress in another person, and as it turns out, that is a massive liability. It means anything in your personality shaped like being slightly mean on purpose gets implicitly erased, which is in fact traumatizing. Or any needs you have that require causing someone a bit of stress just don't get met. It requires an unending quantity of input energy, more and more as you get better at noticing what induces stress and contorting to avoid it. Never intentionally doing harm is completely untenable; it is an utterly unrealistic standard to hold oneself to. One has to intentionally cause some number of harms one is aware of beforehand.
But in doing so there's suddenly room to breathe and live.