Thursday, December 5, 2013

What Humans Do Best - Part 4

I should summarize a little at this point so the direction I've been driving with the earlier sections becomes clear.
  1. Initially I focused upon an ideal optimization problem for resource allocation in family groups because there is a class of problems people resist having automated, and I run into this resistance at work and in other social settings. In this class is the problem of assigning resources that could impact our children's health, education, and other necessities. In a theoretical sense this problem is solvable with linear algebra, but for technical reasons it is only partially solvable. The issue is that the family planner doesn't have the information they need to do a precise job, but they know precision isn't strictly necessary, so they solve a simplified problem instead, and that is usually good enough.
  2. Next I focused upon the simplified problem in more detail using an example and showed how we further reduce it in practice. This reduction is necessary because the simplified problem isn't really solvable, and the people who depend on the family planner know it and resent solutions that do not favor them. In that resentment the planner loses even more information. It also hints at a couple of mathematical issues that go beyond the limits of knowledge.
  3. The last part focuses upon the mathematical issues to show that even the simplified, reduced problem involves delegation and breaking it up into smaller problems for each planner. It then finishes by turning the problem around to show that a planner winds up serving an elite group and really can't operate any other way. As the problem is reduced and subdivided, each elite becomes smaller and better served, but at the cost of control and efficiency.
In this section I want to knit these back together by pointing out that our natural inclination is to subdivide resource allocation problems among many planners, thus we really should be looking at what it means to solve N problems (one per planner), each juggling P_i resources where i runs through the list of planners. In the limit we might assume N to be the number of individuals in a community, but it could easily be smaller than that if people organize around nuclear or extended families. In that case N would be the number of 'atomic' units that plan and economize resource usage. If the average 'family' size were four people, N would be 25% of the community's population. In this sense, the atomic planning groups are probably the best working definition of what a 'family' is when it comes to human communities.
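The counting here is simple enough to sketch directly. This is only an illustration of the subdivision idea; the function name and the numbers are my own assumptions, not part of any formal model:

```python
def count_planners(community_size: int, avg_family_size: float) -> int:
    """Number N of 'atomic' planning units if a community organizes
    into families of a given average size."""
    return round(community_size / avg_family_size)

# A community of 1000 people organized into families averaging
# four members yields 250 planners, i.e. 25% of the population.
print(count_planners(1000, 4))  # 250
```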

Turning the problem around is best done in a step-wise fashion. Let's start by assuming we live in a community with one planner who actually means well by all of us and seriously tries to serve the largest elite they can. There is a way to do this that mollifies the people in the corners of the preference space often enough to keep the peace, and some leaders employ the strategy. The trick is to take a page from game theory and use a bit of randomness in the solution offered over time. If done well, the other players in the game will find it difficult to choose between the strategies they might play in response to the planner. That makes the non-planning players somewhat predictable even if one doesn't know their preferences in detail. To get the most people involved, the planner will target a particular solution and then include a span for each of the variables in the domain. The resulting range of solutions will probably map close to the target solution, but only if we can make the usual assumptions about continuity and derivative smoothness. If we can do that, the solution range might be an n-ball around the target solution on first approximation, where the ball is made as large as the planner can manage in the practical sense.
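The randomizing strategy above can be sketched as sampling solutions uniformly from an n-ball around the planner's target, so each period's allocation is near the target but hard to play against. This is a minimal sketch under the stated continuity assumptions; the function and parameter names are mine:

```python
import math
import random

def sample_in_ball(target, radius, rng):
    """Draw one candidate solution uniformly from an n-ball centered
    on the target allocation. Direction comes from a normalized
    Gaussian vector; scaling the radius by u**(1/n) makes the draw
    uniform over the ball's volume."""
    n = len(target)
    direction = [rng.gauss(0.0, 1.0) for _ in range(n)]
    norm = math.sqrt(sum(d * d for d in direction))
    r = radius * rng.random() ** (1.0 / n)
    return [t + r * d / norm for t, d in zip(target, direction)]

# The planner's preferred allocation of three resources, jittered
# a little differently each planning period.
rng = random.Random(0)
target = [10.0, 5.0, 2.0]
for _ in range(3):
    print(sample_in_ball(target, 1.0, rng))
```

Each draw stays within the chosen radius of the target, which is the "span for each of the variables" made concrete.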

Now consider what happens if the planner fails to mollify a group of people and they rebel successfully and choose their own planner. The first planner need no longer include them and might shift their targeted range to another region of the solution space. The second planner will target something closer to the elite who chose them. The result is two n-balls with a moderate possibility of overlap, since each planner might still want to steal people back to their group and might try for those who were barely motivated to rebel. The two groups will otherwise operate as separate communities, each optimizing resources as best they can, but without the power to force the other to make certain choices. More people will feel included in an elite group and more people will be covered by a randomizing strategy that keeps them somewhat mollified. The cost, however, is that there are now two solutions that might not be reconcilable, and the two communities might fight over scarce resources. History is full of examples where this has happened, but it is also full of examples where the communities found an alternative path that avoided the fight. It is that alternative path we should consider in detail, but it too is a form of optimization.

If two planning communities are roughly equally matched in a potential fight, it doesn't take a genius planner to realize that a fight will destroy more resources than it might gain. In other words, there is a cost to violence. Given that cost, the alternatives are obvious. One can simply do nothing and avoid the fight; one could try to steal resources and get them to defensible positions; or one could offer to trade voluntarily and possibly avoid the anger that comes with the other two alternatives. Doing nothing isn't uncommon. We call it self-sufficiency most of the time. Theft isn't uncommon either, but it leads to a negative-sum game where planners must devote resources to defenses, and that makes it unsustainable over the long haul. Violence usually follows when defense costs get high enough to justify war costs as a replacement.
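The three alternatives can be compared with toy payoffs. The numbers below are assumptions chosen only to illustrate why theft tends toward a negative sum and voluntary trade toward a positive one:

```python
def net_payoff(gain_a: float, gain_b: float, destroyed: float) -> float:
    """Sum of both planners' gains minus resources destroyed or
    diverted (defenses, losses) by the interaction."""
    return gain_a + gain_b - destroyed

do_nothing = net_payoff(0, 0, 0)    # self-sufficiency: nothing gained, nothing lost
theft      = net_payoff(5, -5, 3)   # a pure transfer plus defense costs: negative sum
trade      = net_payoff(4, 6, 0)    # both sides gain from the exchange: positive sum

print(do_nothing, theft, trade)  # 0 -3 10
```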

The winner, though, is voluntary trade, as that can lead to a positive-sum game and a form of optimization where control is less than complete. If two groups can trade resources they get at least some of what they want, but will have difficulty knowing in advance how much they will have to give up to get it. On top of not knowing what resources to assign and to whom, they will also not know how much they will have to trade away to get the ones they think they are missing in their plan. What they will know, though, is that they can acquire a resource if they are willing to pay enough, and that means they can set up their own randomizing strategy between self-sufficiency and trade in a particular resource. In that way, they can moderate the price they pay by putting a ceiling on it at the cost of some other resource they might develop instead. Their optimization problem turns from finding point solutions to finding regions of the solution space, but they were already doing that because each planner was motivated to maintain the power base they needed to keep their job. In this sense, the simplification and reduction of the original optimization problem leads naturally to the trade solution when atomic planning groups break up. Markets should be expected in human societies involving multiple planners.
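The price ceiling that self-sufficiency creates can be written down in one line. A minimal sketch, assuming a planner knows its own cost to produce the resource in-house (the names and numbers are mine):

```python
def acquisition_cost(market_price: float, self_cost: float) -> float:
    """Effective unit cost when the planner can always fall back on
    producing the resource itself: self_cost caps what it ever pays."""
    return min(market_price, self_cost)

# The planner trades when the market is cheap and self-produces
# otherwise, so its cost never exceeds the self-sufficiency ceiling.
self_cost = 10.0
for price in (6.0, 9.0, 14.0):
    print(price, acquisition_cost(price, self_cost))
```

The ceiling isn't free, of course; choosing self-production means some other resource goes undeveloped, which is exactly the trade-off the paragraph describes.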

Finally, let's consider what happens if communities break up into more than two groups. When N groups all vie for resources, the costs of fighting are even more difficult to bear, as they are often a larger fraction of the available resources for a small group than for a larger one. In the limit of individuals, they simply don't have the wealth necessary to fight for all their preferences when facing N-1 other planners unless the starting conditions are seriously out of balance. If they are that far out, it would make some sense for the N-1 others to steal as rapidly as they can to force something closer to a balance, or to force the single player to devote their wealth to defense and wait them out much like one would when employing siege tactics. Absent large imbalances, though, small planning groups are best served by markets if the markets function transparently enough to make their optimization problems moderately predictable. A market that isn't transparent isn't motivating, as it doesn't solve their coordination problem with other groups, leaving them with the original options of doing nothing, stealing, or violence. Transparency in a voluntary market is the minimum necessary requirement.
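The scaling claim about fight costs can be illustrated with a toy calculation, assuming a roughly fixed cost per conflict and wealth proportional to group size; both assumptions are mine, chosen only to show why small groups feel the burden more:

```python
def fight_cost_fraction(group_size: int, wealth_per_member: float,
                        fixed_cost: float) -> float:
    """Fraction of a group's total wealth consumed by one fight,
    assuming the cost of a conflict is roughly fixed."""
    return fixed_cost / (group_size * wealth_per_member)

print(fight_cost_fraction(100, 10.0, 50.0))  # 0.05 -- bearable for a large group
print(fight_cost_fraction(1, 10.0, 50.0))    # 5.0  -- far beyond an individual's wealth
```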

I've been purposely ambiguous when I use the term market because humanity has more than one kind of market it employs for trade and each uses a different coordination technique and exchange medium. Some of the market types are truly ancient and barely recognizable to those who think only in terms of money and barter. Others are so new they too are barely recognizable, yet they are changing the world so rapidly with new voluntary trade that we are left dizzy after our heads are spun around. I'll finish up next time with my views on each of the markets I can see and leave open the question of whether there are more.