mako yass

Comments

I actually don't think we'd have those reporting biases.

Though I think that might be trivially true; if someone is part of a community, they're not going to be able or willing to hide their psychosis diagnosis from it. If someone felt a need to hide something like that from a community, they would not really be part of that community.

A nice articulation of false intellectual fences:

Perhaps the deepest lesson that I've learned in the last ten years is that there can be this seeming consensus, these things that everyone knows that seem sort of wise, seem like they're common sense, but really they're just kind of herding behaviour masquerading as maturity and sophistication, and when you've seen how the consensus can change overnight, when you've seen it happen a number of times, eventually you just start saying nope.

Dario Amodei


I think there are probably reporting biases and demographic selection effects going on too:

  • It's a very transparent community; when someone has a mental break, everyone talks about it.
    • And we talk about it more than a normal community would, because a rationality community is going to find overwhelming, physiologically induced irrationality interesting.
  • Relatedly, a community that recognizes that bias is difficult to overcome, but can be overcome by recognizing it, will also normalize recognizing it. We tend to celebrate admissions of failure more than most communities do. So schizophrenics might be less ashamed to confess that these things have happened.
  • The trainings may give those disposed to psychosis an overconfidence in their own rationality that leads them to delay pursuing treatment, leading to worse/more surprising breaks when they do happen.
  • For vague aesthetic reasons, people with a preexisting disposition to psychosis may be more likely to come here: rationality training, if it exists, is something they'd want, and they'd appreciate the kinds of people who are willing to reality-check them.

but I didn't actually notice any psychological changes at all.

People experience significant psychological changes from like, listening to music, or eating different food than usual, or exercising differently. So I'm going to guess that if you're reporting nothing after hormone replacement, you're probably mostly just not as attentive to these kinds of changes as cube_flipper is, which is pretty likely a priori, given that noticing that kind of change is cube_flipper's main occupation. Cube_flipper is like a wine connoisseur, but instead of wine it's perceptual shifts. Their language may sometimes sound odd or exaggerated, until you try the wine again while bearing it in mind, and then you'll see what they were getting at.

It's surprisingly easy to overlook this kind of shift, too. I can absolutely imagine a person getting this unflatness sensation and then just never finding the language to describe it before they totally forget how strangely flat the world used to feel.

At some point I'm gonna argue that this is a natural Dutch book on CDT. (FDT wouldn't fall for this.)

I have a theory that the contemporary practice of curry with rice represents a counterfeit yearning for high meat with maggots. I wonder if high meat has what our gut biomes are missing.

I'm not sure what's going on here. It's not as though avoiding saying the word "sycophancy" would make ChatGPT any less sycophantic.

My guess would be they did something that does make o4 less sycophantic, but it had this side effect, because they don't know how to target the quality of sycophancy without accidentally targeting the word.

More defense of privacy from Vitalik: https://vitalik.eth.limo/general/2025/04/14/privacy.html

But he still doesn't explain why chaos is bad here. (It's bad because it precludes design, or choice, giving us instead the Molochian default.)

With my cohabitive games (games about negotiation/fragile peace), yeah, I've been looking for a very specific kind of playtester.

The ideal playtesters/critics... I can see them so clearly.

One would be a mischievous but warmhearted man who has lived through many conflicts and resolutions of conflicts; he sees the game's teachings as ranging from trivial to naive, and so he has much to contribute to it. The other playtester would be a frail idealist who has lived a life in pursuit of a rigid, tragically unattainable conception of justice, begging a cruel paradox that I don't yet know how to untie for them, to whom the game would have much to give. It's my belief that if these two people played a game of OW v0.1, then OW 1.0 would immediately manifest and ship itself.


Can you expand on this, or does anyone else want to weigh in?

Just came across a datapoint, from a talk about generalizing industrial optimization processes: a note about increasing reward over time to compensate for the exhaustion of low-hanging fruit.

This is the kind of thing I was expecting to see.

Though, while I'm not sure I fully understand the formula, I think it's quite unlikely that it would give rise to a superlinear U. And on reflection, increasing the reward superlinearly seems like it could have some advantages, but they would mostly be outweighed by the system learning to delay finding a solution.

Though we should also note that there isn't a linear relationship between delay and resources. Increasing returns to scale are common in industrial systems: as scale increases by one unit, the amount that can be done in a given unit of time increases by more than one unit. So a linear utility increase for problems that take longer to solve may translate to a superlinear utility for increased resources.
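
To gut-check that last step, here's a minimal toy sketch (my own construction, not anything from the talk), assuming reward is linear in difficulty/delay and throughput is superlinear in resources; the specific functions and the 1.3 exponent are arbitrary illustrative choices:

```python
# Toy model (assumed forms): if reward is linear in how long a problem would
# take at unit scale, but throughput has increasing returns to scale, then
# utility over a fixed horizon comes out superlinear in resources.

def reward(difficulty: float) -> float:
    # Linear reward in difficulty/delay, mirroring "increase reward over
    # time to compensate for low-hanging fruit exhaustion".
    return difficulty

def work_rate(resources: float) -> float:
    # Increasing returns to scale: each extra unit of resources adds more
    # than one unit of work per unit time. The 1.3 exponent is arbitrary.
    return resources ** 1.3

def utility_over_horizon(resources: float, horizon: float = 100.0) -> float:
    # Difficulty cleared in the horizon is rate * time; reward is linear in
    # difficulty cleared, so U(r) = horizon * r**1.3.
    return reward(work_rate(resources) * horizon)

for r in [1, 2, 4, 8]:
    print(r, utility_over_horizon(r))
# Doubling resources multiplies utility by about 2.46 (2**1.3): superlinear
# in resources even though the reward schedule is only linear in delay.
```

With an exponent of exactly 1 the resulting U is linear, and below 1 it's sublinear, so the shape of U really does hinge on the delay-resources relationship.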

So I'm not sure what to make of this.
