Human Flourishing

There’s a thought experiment in theology: What would you change if you were God? If you try it, you’ll notice that it’s very difficult to say anything, since any suggestion runs into the issue of undesired and unforeseen consequences. This doesn’t mean there isn’t suffering in the world; it means suffering is difficult to solve completely, and, for any particular problem, perhaps impossible to solve immediately.

We try to solve a lot of things in our society: social media, AI, the Internet, and robotics are all attempts to overcome the limitations of the natural world. While they do reduce those limitations, each one faces its own problems:

1. Can you really enhance social interactions with algorithms?
2. What fails when you try to make an intelligence from scratch?
3. Can a billion people really connect to each other over twisted-pair?
4. Why do we always prefer hand-made over factory-made?

I suggest that there’s an issue with complex and large systems, particularly artificial ones: they tend towards standardization, fungibility, and inflexibility.

Imagine someone asked you to rebuild a government with a completely different set of politicians. It’s really hard, so you might choose to keep a democratic process like the one that was, hopefully, already there. But fundamentally, the effort is unlikely to succeed, because politics is already optimized: the existing politicians already have deep knowledge, influence, networks, relationships, and an understanding of the system.

I notice that as I write this, I keep posing thought experiments. That’s intentional, of course. But it points to something: optimized systems chase metrics, and the emergent solutions to optimization problems won’t be optimal for the metrics we don’t yet see, or can’t yet define.
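
To make that a bit more concrete, here’s a minimal sketch in Python. It’s my own toy example, not anything from a real product: a one-knob “feed” that hill-climbs on the only metric it can see (engagement) while an unmeasured quantity (wellbeing) quietly degrades. Every function and number in it is invented for illustration.

```python
# Toy illustration of chasing a proxy metric: the optimizer only ever sees
# "engagement", never "wellbeing". All numbers here are made up.

def engagement(sensationalism: float) -> float:
    """Proxy metric the system can measure: clicks keep rising with sensationalism."""
    return 10 * sensationalism

def wellbeing(sensationalism: float) -> float:
    """The thing we actually care about, which the optimizer never sees."""
    return 5 - 8 * sensationalism ** 2

# Naive hill climbing on the proxy alone.
knob, step = 0.0, 0.05
for _ in range(20):
    if engagement(knob + step) > engagement(knob):
        knob += step

print(f"sensationalism = {knob:.2f}")
print(f"engagement     = {engagement(knob):.1f}  (what the dashboard shows)")
print(f"wellbeing      = {wellbeing(knob):.1f}  (what nobody measured)")
```

The dashboard looks great at the end of the run; the number nobody defined up front is the one that went negative. That’s the whole point: the optimizer can only ever be as wise as the metrics we hand it.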

So we should always design systems with the base idea that they need to improve human flourishing, beyond any metric. We should understand that there’s no easy route to success, and that relentless optimization and growth create as many problems as they solve. Maybe we should be ready to accept that an imperfect solution designed with care can be better, even if it doesn’t produce the same results according to predefined metrics.

The systems I mentioned before, from social media to robotics, all start to treat their inputs and outputs as fungible as they grow. When Twitter was small, you could like somebody’s post and they’d know what you meant by it. Now a like is a standardized measure that fuels the algorithms, so it’s more of a currency than a communication. The only way around that is to not care so much about engagement, and just use Twitter for fun.

Human flourishing is incredibly difficult, and probably impossible, to define and optimize for. But if we let go of control, and if we build systems that don’t demand maximal growth every quarter, maybe we’ll find that in not seeking it, we arrive.

Finally, let’s take a concrete example: an apple. The one you buy at the supermarket will never be as good as the one you pluck from a random tree, on a random farm, with a caring farmer.

You know this to be true, and now I hope we both see a bit more of why that is. Thank you for reading!
