Safety and vulnerability.

I just read an amazing research paper on what makes people trust and cooperate. In my previous article, I talked about how commerce requires vulnerability: you expose yourself to uncertainty, because the other person may or may not do the thing that creates a positive outcome for you. And yet you proceed with the arrangement because you trust them. So to carry out arrangements with others, you should be someone who hangs around vulnerabilities and is trustworthy.

In this paper, they set up an interesting game where:

  • People can gain if they cooperate
  • People can gain if they cheat
  • People don’t gain much if both cheat or if one gets cheated on
  • And if someone gets caught cheating, they end up in a bad position.

And just like in real life, you might not know the probability that someone who cheats gets caught and punished for it. If that probability is way too low, someone might just cheat on you, right? And that matters when you are deciding what to do.

What they found is that people are more likely to cooperate with each other if they are convinced that they would be fine even if they got cheated on. That happens in two scenarios:

  1. If what they are left with after being cheated on is good enough.
  2. If a cheater who gets caught is fined enough that cheating is not a better option than just cooperating.

Basically, a game where heads you win, and tails you don’t lose much.

So if you can make someone believe that they would be fine, they will cooperate with you. If someone feels, “Hey, even if you cheat, I would be fine; heads I win, and even on tails I don’t lose much,” they will play the coin toss with you. And what makes someone fine depends on what they end up with if they get cheated, and on what happens to a cheater who gets caught.

Essentially, safety nets make people go forward with vulnerable arrangements because safety lets them trust at a lower threshold, which makes them more willing to take the risk and cooperate. The flip side: even if you are willing to cooperate in a really risky environment, that can put off the partner who is supposed to cooperate with you: “this guy is taking a reckless risk, why should I do the same? Why shouldn’t I just cheat?” Basically, unsafe environments make people not trust each other even if one party is willing to.

To be trustworthy, so that people will carry out arrangements (e.g., commerce) with you, you have to position yourself as someone for whom cheating would be irrational.


The paper, explained by AI:

Trust and Vulnerability, by Chris Bidner and Ken Jackson, August 23, 2012


The Basic Game Setup

Two people are considering a business deal. They can either work together honestly or try to cheat each other. If they both work honestly, they each make a decent profit—let’s say $100. That’s the payoff r = 100.

If they both try to cheat each other (or refuse to cooperate), the deal falls apart and they each only make $1. That’s mutual defection.

But here’s the temptation: if you work honestly while your partner cheats, they can steal your work and make $200. That’s t = 200. Meanwhile, you only make $20 because they sabotaged you. That’s s = 20. So being cheated on is brutal—you get only $20 instead of the $100 you would have made if you both cooperated.

Now, there are courts that can punish cheaters. If a court catches your partner cheating on you, they impose a fine of, say, $150. That’s F = 150.
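
Just to keep the numbers straight, here is a minimal sketch of that payoff structure in Python. This is my own encoding, not the paper’s, and I am assuming mutual defection pays the $1 mentioned above:

    # Payoffs from the setup: r = 100, s = 20, t = 200, F = 150.
    # Mutual defection is assumed to pay $1 each (the deal falls apart).
    R, S, T, F = 100, 20, 200, 150
    D = 1

    def payoff(me: str, partner: str, caught: bool = False) -> int:
        """My dollar payoff for one deal; `caught` only matters if I defect."""
        if me == "cooperate" and partner == "cooperate":
            return R                         # honest deal: $100 each
        if me == "cooperate" and partner == "defect":
            return S                         # I got cheated: $20
        if me == "defect" and partner == "cooperate":
            return T - (F if caught else 0)  # $200, minus the $150 fine if caught
        return D                             # both defect: deal falls apart, $1

    print(payoff("cooperate", "cooperate"))     # 100
    print(payoff("cooperate", "defect"))        # 20
    print(payoff("defect", "cooperate", True))  # 50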

The Institutional Uncertainty

But here’s the thing: the courts aren’t reliable. Sometimes they catch cheaters, sometimes they don’t. The probability they catch a cheater is θ (theta).

For some types of contracts, courts are really good at enforcement—maybe θ = 0.9 (90% chance they catch you). For other types of deals, courts are useless—maybe θ = 0.1 (only 10% chance).

The problem: Neither you nor your partner knows what θ actually is for this particular deal. You each get a private, noisy signal about it. Your signal might suggest θ is around 0.7. Your partner’s signal might suggest θ is around 0.6. You don’t know what they heard, and they don’t know what you heard.

Why This Matters

Now you have to make a decision: should I cooperate or defect?

Your logic goes like this:

“If I cooperate and my partner also cooperates, I make $100. That’s great.”

“But if I cooperate and my partner defects, I only make $20. That’s terrible.”

“If my partner defects, will they get caught? Well, there’s a θ probability they get caught and pay the $150 fine. But there’s also a (1 − θ) probability they don’t get caught and keep the full $200.”

“So if my partner defects, my payoff is $20, whether or not they get caught.”

“If my partner cooperates, my payoff is $100 with certainty.”

“So I should cooperate only if I’m confident my partner will cooperate.”

“But how confident should I be? That depends on whether they think it’s worth cooperating.”

“And they will cooperate only if they’re confident I will cooperate.”

“So we have a circular dependency.”
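
One bit of arithmetic the reasoning above leans on: from the would-be cheater’s point of view, when does cheating stop being worth it? A quick back-of-the-envelope check with the numbers from the setup, assuming their partner cooperates:

    # Cheater's expected payoff against a cooperating partner: t = 200, F = 150.
    # Cooperating honestly would have paid r = 100.
    T, F, R = 200, 150, 100

    def expected_cheat_payoff(theta: float) -> float:
        # Caught (and fined) with probability theta, keeps the full $200 otherwise.
        return theta * (T - F) + (1 - theta) * T  # = 200 - 150 * theta

    for theta in (0.1, 0.5, 2 / 3, 0.9):
        print(f"theta = {theta:.2f}: cheating = {expected_cheat_payoff(theta):6.1f} vs cooperating = {R}")

    # Cheating beats cooperating only while 200 - 150*theta > 100,
    # i.e. while theta < 100/150 (about 0.67). Above that, cheating is irrational.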

The Signal Problem

Now, how do you figure out whether your partner will cooperate?

You use your signal. Your signal tells you something about θ. If your signal suggests θ is high (courts are strong), then you think:

“If my partner defects, there’s a good chance they’ll get caught. So my partner will probably cooperate. So I should cooperate too.”

If your signal suggests θ is low (courts are weak), you think:

“If my partner defects, there’s a small chance they’ll get caught. They know this too. So they might defect. So maybe I should defect first.”

But here’s the key: your signal is noisy. You don’t know the true θ. Your partner also got a noisy signal. They also don’t know the true θ. And you don’t know what signal they got.

The Equilibrium Threshold

In equilibrium, both players use the same strategy: they pick a threshold. Let’s call it x*.

If your signal is above x*: you cooperate.

If your signal is below x*: you defect.

At exactly x*, you’re indifferent. You’re on the fence.

The paper derives what x* should be. And here’s the shocking part: x* depends on s, the payoff when you get cheated.

The Vulnerability Effect

If s is high (you don’t get hurt much when cheated—maybe you have a strong safety net), then x* is lower. That means you’re willing to cooperate even when your signal suggests the courts are weaker.

Why? Because even if you do get cheated, it’s not the end of the world. You’ll survive. So you’re braver about cooperating.

If s is low (you get absolutely destroyed if cheated), then x* is higher. You only cooperate when your signal strongly suggests the courts are very reliable.
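
Here is a rough numerical illustration of that effect. To be clear, this is not the paper’s derivation; it is a back-of-the-envelope sketch where I assume the player sitting exactly at the threshold believes their partner cooperates with probability 1/2 (a common shortcut in global-games models), and that mutual defection pays the $1 from the setup:

    # Hypothetical threshold sketch (NOT the paper's actual derivation).
    # The marginal player is indifferent between cooperating and defecting,
    # believing the partner cooperates with probability 1/2.
    R, T, F, D = 100, 200, 150, 1  # honest payoff, cheater's gain, fine, mutual defection

    def threshold_theta(s: float) -> float:
        """Court quality theta at which a player with fallback s is indifferent.

        Cooperate EV: 0.5 * R + 0.5 * s
        Defect EV:    0.5 * (T - theta * F) + 0.5 * D
        Setting them equal and solving for theta gives (T + D - R - s) / F.
        """
        return (T + D - R - s) / F

    for s in (20, 50, 80):
        print(f"s = ${s}: indifferent at theta = {threshold_theta(s):.2f}")

    # s = $20 -> 0.54, s = $50 -> 0.34, s = $80 -> 0.14.
    # The better your fallback when cheated, the weaker the courts can be
    # before you stop cooperating: the threshold falls as s rises.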

The Feedback Loop

Here’s the really interesting part. Imagine the government improves the social safety net, so s goes from $20 to $50. Now if you get cheated, you lose less.

This lowers x*. You’re willing to cooperate at a lower signal threshold.

But when you cooperate more often, your partner observes that. They think: “If they’re cooperating, they must be confident the courts are reasonably strong. So I should cooperate too.”

So they also lower their threshold and cooperate more.

Which makes you even more confident they’ll cooperate.

Which makes you cooperate even more.

It’s a positive feedback loop. Better social protection → lower threshold → more cooperation → partner sees more cooperation → partner also cooperates more → you cooperate even more.

And all of this happens even though, in equilibrium, nobody actually gets cheated. The cheating never materializes because the mutual uncertainty creates enough caution that both players cooperate when signals are high.

The Key Insight

The paper’s main finding is this: In the baseline model where both players perfectly know θ, vulnerability has zero effect on trust. It doesn’t matter if s is $20 or $50—you cooperate the same amount. This is because in that model, if θ is known to be high enough, you know for sure that cheating won’t happen. So it doesn’t matter how much it would hurt.

But once you introduce realistic uncertainty (noisy private signals), vulnerability becomes crucial. It shapes your threshold. It creates a feedback loop. And it explains why countries with strong social safety nets have higher trust and more economic cooperation.


AI shii over


  • Some problems create exposure to risks
  • Exposure creates vulnerability
  • Vulnerability creates need for coordination
  • Coordination under uncertainty requires trust
  • Trust lowers the action threshold
  • Lower thresholds enable commerce

The business you want to do should tackle problems that create risks. A lot of problems create no risk. I have a problem right now: my sock has a hole in it, but it creates no risk for me. I have another pair that I am going to use, and I am going to discard this one. It created no vulnerability for me because it was not relevant enough; I already had socks, so there was no need to engage in any commerce. So, if you want commerce, go after problems that create risks. In the previous article, I discussed that risks are not only economic but also personal, emotional, and social. One might go after learning a language or creating art because they need to reinforce their own identity. The problem of not creating art exposes them to the risk of identity collapse, and they will happily pay for art supplies to avoid it. Going back to the sock: if I had zero socks and I was going to visit a Japanese restaurant with my colleagues, where I would have to take off my shoes, then hey, I need to engage in commerce right now, because I am exposed to the risk of looking like a bum in front of my colleagues.

I think instead of looking for problems, it would be much healthier to look for vulnerabilities, because one vulnerability can lead you to a lot of problems. A problem is a very needle-in-a-haystack kind of thing, while a vulnerability can be observed easily and can lead you back to the problem or problems. When you see a vulnerability, you’ve found a structural pressure that generates problems across domains. If you see someone living paycheck to paycheck, that vulnerability keeps them stuck in a job; they can’t invest in upskilling; they can’t start a business. That single vulnerability creates so many problems that we can solve for them.

The art guy who has an identity vulnerability if he does not make art is exposed to real economic and social risk, because he might not be able to apply himself in other areas of life: he might not perform well in a job, communicate with people, or form connections. So if we solve that vulnerability and highlight all the other problems it causes down the line, we can easily sell him our solution. I think this is exactly why luxury advertising is able to command such high prices: it does not talk about the product but about everything ancillary to it. The product is solving a vulnerability, and that vulnerability, if it persists, can cause a lot of problems downstream. Talking about those problems, about avoiding them, or about the potential gains from the product really pushes the case, because that way you are truly selling the notion that “heads you win, tails you don’t lose much.”

But I think it would not be great to apply this approach to something that is used in a really utilitarian way, such as software. People expect software to just do things, not to be life-changing, especially since the bulk of software income comes from B2B, not B2C. No one wants to hear a database company talking like a fucking Rolex marketing campaign. Of course, if a slowdown in a workflow creates a vulnerability, you can say that your software does this and that, that it helps create enterprise value, that it boosts employee morale, and so on. But it still seems kind of dicey. I mean, a lot of software already kind of does this. Stripe talks about the frustration of payments, and a lot of payment applications talk about that frustration. Claude Code also talks about frustration and how you can do more, how more is possible. I think it is because B2B software is really operational, and they are talking about other vulnerabilities that emerge from operational vulnerabilities: we have a problem that leads to an operational vulnerability, and that operational vulnerability leads to other vulnerabilities that might be emotional, identity-based, or social.

Real-world problem → Operational vulnerability → Secondary vulnerabilities (emotional, identity, social)


I have previously talked about the business cycle: something is a business only if it can complete these 5 steps.

  1. Spend money to find work.
  2. Spend money to do the work.
  3. Spend money to deliver the work.
  4. Get paid by the customer.
  5. Reinvest in Step One.

Without step 5, you don’t have a business; it becomes a one-time gig. People might not be able to do step 5 because they are locked into some contract, because they are not able to find work, or because whatever money they made through steps 1–4 is not enough to put back into step 1. Basically, their customer acquisition cost is too high, and it becomes unviable to run the cycle. So it always helps to have a lower cost of finding work. That’s why I went on this quest of understanding trust and vulnerabilities. Now we want to understand how to navigate this business cycle with that knowledge.

Step 5 is hard. That is, customer acquisition is hard because it requires trust under uncertainty, and you have to solve for it under the pressure of the costs of steps 1 to 4. With all these constraints, you have to make customers believe that you can do the work, that you won’t cheat them, and that it’s worth the cost.

Playing around with prices would increase the pressure even more; you should compete by reducing customer vulnerability instead. Identify the vulnerabilities that your service or product solves. Show the cascade of problems that the vulnerability creates. And make sure your customer feels that with your product or service, heads they win and tails they don’t lose much. This makes them trust you more easily, which naturally requires less CAC, because customer acquisition cost is just the cost of artificially creating touchpoints with your customers until they start trusting you. By highlighting their vulnerabilities, you are revealing the demand that already exists.

Basically, you want to spend as little on step one as possible, do step two and step three in a sane way, and hope step four happens without any problems, so that you have enough left over to do step five. Essentially, what you collect in step 4, after subtracting the expenses of steps 1, 2, and 3, should be enough to perform step 5, that is, to reinvest into the next step 1.
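
As a sanity check, the whole viability condition fits in a few lines. This is just a toy model of the five steps with made-up numbers, not data from anywhere:

    # Toy model of the five-step business cycle (illustrative numbers only).
    def cycle_is_viable(find_cost, do_cost, deliver_cost, revenue):
        """Can one pass through steps 1-4 fund step 5 (the next step 1)?"""
        left_over = revenue - (find_cost + do_cost + deliver_cost)
        return left_over >= find_cost  # must cover the next customer's acquisition cost

    print(cycle_is_viable(find_cost=200, do_cost=300, deliver_cost=100, revenue=1000))  # True: a business
    print(cycle_is_viable(find_cost=200, do_cost=300, deliver_cost=100, revenue=700))   # False: a one-time gig

    # Lowering the cost of finding work (which is what trust buys you) is the
    # easiest way to tip the second case back into a business.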


From Claude


Example: Slow database queries

  1. Real problem: Queries take 5 seconds instead of 0.5 seconds
  2. Operational vulnerability: Your platform is slow. Features take longer to build. Releases are delayed.
  3. Secondary vulnerabilities:
    • Professional identity: “I’m a slow engineer. My code is inefficient.”
    • Team morale: “We’re stuck on infrastructure instead of shipping cool stuff.”
    • Business: “We’re losing customers to faster competitors. We look outdated.”
    • Career anxiety: “Am I falling behind? Is this a dead-end project?”

A database company could pitch it as:

(Weak): “Our database queries are faster.”

(Strong): “Your engineers are frustrated. They’re spending time waiting on infrastructure instead of building features. Features ship slower. Your product feels slow to users. You’re losing customers to competitors who ship faster. Your engineers feel like they’re working on a slow project. Smart people leave for faster companies. You feel like you’re managing decline instead of building something great.”

“Move to our database. Queries are instant. Engineers ship faster. Your product feels snappy. You attract ambitious engineers. You feel like you’re building something great.”