Thinking about this thread on "How could I have thought of that faster?". In practice, I noticed that phrasing doesn't work well for me, and I prefer "How could I have seen this faster?". I especially like this framing when I have identified a "blindspot" or a "developmental milestone", or when noticing that someone's wizard power clearly hints at a powerful learnable skill or concept I wasn't aware of and don't yet possess.
I feel this frame helps me better find areas where there are cached beliefs to correct, and to remember past hints at the shape of the thing. I do have an easier time correcting object-level beliefs than finding good ways to hone heuristics about my own thinking and how I could avoid some kinds of "going in circles". I find it hard to account for hindsight, and memory can also be tricky.
Looking back at the original thread:
I currently don't have a fast perceptual view like that, even though I think I understand the concepts mentioned above. I don't feel like thinking of my mind as an engine helps me in post-mortems. So either I lack a skill here or this was hinting at a point I'd find elementary.
Funnily enough, cluncs and the do operator are easily implemented in nix (merging attribute sets is quite important in nix). The only problem is that nix doesn't support even basic numeric operations like sqrt or pow, so I gave up on implementing your causal examples.
let
  # do: tie the recursive knot through the overrides, so the model sees the
  # overridden values and everything downstream of them is recomputed (lazily)
  do = model: overrides: let self = model (self // overrides); in self;
  quad = self: {
    x = 4;
    constant = 3;
    linear = 2 * self.x + self.constant;
    result = self.x * self.x + self.linear;
  };
  quadDefault = do quad { };
  quad2 = do quad { x = 2; };
  quadNoLinear = do quad { linear = 0; };
in {
  default = quadDefault.result;
  overridden = quad2.result;
  noLinear = quadNoLinear.result;
}
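For reference (assuming I haven't mis-evaluated anything), running nix-instantiate --eval --strict on this file should print something like { default = 27; noLinear = 16; overridden = 11; }: the override x = 2 propagates into linear and result, and the linear = 0 intervention cuts that term out of result.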
It is time that I apply the principle of more dakka and just start writing (or rather publishing) more. I know deliberate practice works for writing. "If you want to be good at something, you need to do it badly at first and then just keep doing it" is very common advice. I still find it very hard to follow.
It's hard to decide what to write about
I get anxiety about writing something that is not good enough
The topics that come to mind about what to write revolve mostly around how hard it is to write.
Some of the writers I admire would see it as below their standard to write a post like that? Lots of unproductive, self-reinforcing status anxiety follows.
Perfectionism around writing, and around what I can do on the platform I am writing on.
The real pain: all the small decisions at the end. "Do I also post this on my blog, or is it too much work?" Deciding to try, since in the worst case I can cut that part of the process tomorrow. Noticing that those annoyances could also be added to the post, then noticing how this is taking over the post.
In the end, I fell into the trap of editing again (I could have been done in 1 hour, and now it is 1 hour 46 minutes).
That is a high false positive and false negative rate, but not fatally so if we get more coverage in return.
Actually, if we are willing to accept higher false-negative and false-positive rates in exchange for coverage, bisulfite sequencing might also be back on the table, because you can just run the reaction for a shorter time, which would keep more DNA intact (presumably with some kind of bias that might not be worth the headache).
why do stimulants help ADHD? well, they short circuit the part where your brain figures out what priorities to trust based on whether they achieve your true motives
I view taking stimulants more as a move to give the more reflective parts of my brain ("getting my taxes done is good, because we need to do it eventually; now is actually a good time; doing my taxes now will be as boring as doing them in the future, rather than playing Magic: The Gathering now") more steering power compared to my more primitive "true motives", which tend to be hyperbolically discounted ("dozing in bed is nice", "washing dishes is boring", "doing taxes is boring"). Maybe I am horrible at self-modelling, but the part where the self-model is out of sync, as an explanation for why the self-reflective parts have less steering power, seems unnecessary.
Not sure what's going on, but gpt-4o keeps using its search tool when it shouldn't and telling me about either the weather or Sonic the Hedgehog. I couldn't find anything about this online. Are funny things like this happening to anyone else? I checked both my custom instructions and the memory items, and nothing there mentions either of these.
Small groups of mammals can already cooperate with each other (wolves, lions, monkeys, etc.). In mammals, I'd guess having a queen creates a bottleneck on how fast there can be offspring. Also, if there are large returns to division of labor in child-rearing, large animals are smart enough that both parents can do this together, while in wasps the males just die (why, actually?). So wasps get higher marginal returns from evolving the first steps towards eusociality. Also, smaller animals have more diverse environments and need fewer years to "lock in" eusociality, with workers born infertile (eusocial groups where workers are still fertile are really unstable, so they are prone to evolving away from eusociality again when circumstances are no longer favorable). Also, fathers can't be as sure the offspring are theirs, and vice versa, which leads to less cooperation when new males join in; termites overcome this by having a king and queen, ants just have a queen that stores her sperm, while naked mole rats are just fine with incest?
… that wasn’t enough to learn the pattern, though. Shortly out of college, reality was still hitting me over the head; that time the big idea was an efficient implementation of universal competitively-optimal portfolios. I lost a couple thousand dollars on wildly over-leveraged forex positions.
I am curious what that idea was and where it went wrong.
In that case also consider installing PowerToys and pressing Alt+Space to open applications or files (to avoid unhelpful internet searches etc.).
Yep! When I was in high school I was self-learning physics and learning calculus via Khan Academy, where I learned that "sometimes", "somehow", you can for example solve a differential equation by getting all the dx's and x's on one side and the dy's and y's on the other. I always expected that when I studied this stuff more rigorously at university, this would be explained at some point. To my great disappointment, the mandatory class on this subject for my CS degree, "real analysis", did not in fact clear up why this works in the slightest!? I don't know if every real analysis class is this bad (my linear algebra courses were excellent in comparison), but mine mostly focused on making students adopt "rigor"[1], presenting definition after definition and theorem after theorem (for short theorems, proofs were included, but with zero subtext about the motivation or idea behind the proof; complicated proofs were just skipped). Starting my studies during Corona and not being able to ask professors questions like that directly certainly didn't help. Somehow I forgot, though, that I never got the answers I had hoped for to these questions. Also, the quotient in df/dx still confuses me, and it might be due to my confusion about division? Looking at the Wikipedia article you link, maybe I should take a look at differential forms?
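As far as I can tell (my own paraphrase, not something the course ever said), the usual justification is that the dx-shuffling is the substitution rule (chain rule in reverse) in disguise: for $y'(x) = f(x)\,g(y(x))$, divide by $g$ and integrate both sides in $x$,
\int \frac{1}{g(y(x))}\, y'(x)\, dx = \int f(x)\, dx
\quad\Longleftrightarrow\quad
\int \frac{dy}{g(y)} = \int f(x)\, dx,
so "moving dy and dx to opposite sides" is shorthand for a change of variables on the left, not literal fraction algebra.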
As my secondary subject for my bachelor's, I chose physics, but that mostly involved applying Euler–Lagrange to particular systems. I remember becoming really confused at that point about why this can work, and noticing that I couldn't fit the explanation in my head (I am in fact still confused and would love it if you could point me to useful resources!). I remember wondering why you can differentiate with respect to speed and treat acceleration etc. as not dependent on speed; weird. Clearly, when framing them all as functions of time, it made no sense (and still doesn't). I looked it up on Wikipedia and apparently "Differential Geometry" was the answer to my questions, so I got 2–3 textbooks on differential geometry (for physics), but just from skimming, I couldn't find something that made this click.
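Writing out the convention that confuses me, as I understand it: the Lagrangian is a function of independent slots $L(a, b, t)$, and $\partial L/\partial \dot q$ means the partial derivative with respect to the second slot, only afterwards evaluated along a path, so the Euler–Lagrange equation reads
\frac{d}{dt}\left(\left.\frac{\partial L}{\partial \dot q}\right|_{(q(t),\,\dot q(t),\,t)}\right) = \left.\frac{\partial L}{\partial q}\right|_{(q(t),\,\dot q(t),\,t)},
i.e. $q$ and $\dot q$ are independent arguments of $L$, and they only become dependent once everything is evaluated along a specific trajectory $q(t)$, which is where the total $d/dt$ on the left comes in.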
Probably I should have looked more aggressively for people to tutor me. Now, with language models, it feels worth taking another stab at this, though. For example, I told GPT-4o the rough areas I am confused about (link to full chat): there are these people talking about scaling laws in the context of "scaling laws" in engineering, and it seems like I might be missing some prerequisites, because I've bounced off this topic a few times even though it seems extremely interesting. This led it to introduce dimensional analysis to me; then I asked more questions, and ultimately it gave me some textbook recommendations. One of them, "Street-Fighting Mathematics", I had already heard praised in "Biology by the Numbers", so I checked it out and discovered this gold.
Another question I am still confused by: how does your choice of units affect what types of dimensionless quantities you discover? Why do we have the ampere as a fundamental unit instead of just M, L, T? What do I lose? What do I lose if I reduce the number of dimensions even further? Are there other units that would be worth adding under some circumstances? What makes this non-arbitrary? Why is temperature a different unit from energy?
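To make the first question concrete for myself (the standard pendulum example, not something from the thread): from the period $t$, length $\ell$, gravity $g$, mass $m$, and amplitude $\theta_0$, with base dimensions M, L, T the only dimensionless groups are
\pi_1 = t\sqrt{\tfrac{g}{\ell}}, \qquad \pi_2 = \theta_0,
\qquad\Rightarrow\qquad
t = \sqrt{\tfrac{\ell}{g}}\; f(\theta_0),
and the mass drops out automatically, since it is the only variable carrying M. If you added a further base dimension (say, treating angle as its own dimension), $\theta_0$ would no longer be automatically dimensionless, which is the kind of thing I mean by the choice of units affecting which dimensionless quantities you find.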
Also, all the formalizations of dimensional analysis that Terence Tao mentions here strike me as ugly? Is that me being stupid for thinking that there must exist something nicer?
Which I found mostly annoying and trivial, but it made me glad I had spent ~60 h before university just practicing proving things. How to Prove It is a great book! ↩︎