
Cyberspace in the 21st Century: Part Four, Foundations II

Modern operating systems may be relatively straightforward to use, but under the surface they are now huge behemoths requiring vast amounts of manpower to develop and administer. It doesn't have to be this way. A new operating system will arise that avoids the costs associated with the current crop. How are we going to produce a network operating system without the might of an organization such as Microsoft or IBM? The answer takes a leaf out of nature's book, saying "Sod perfection! We'll just go for 'mediocre' and lots of it." Crosbie Fitch explains how this approach suits distributed network gaming better than traditional alternatives.

Crosbie Fitch, Blogger

December 12, 2000


Cyberspace: the next online revolution -- massive multiplayer online games supporting billions of players simultaneously.

You've seen the Web. Almost the entire planet is wired up. Everyone uses e-mail to talk to each other now, and even the telephone has changed. We're going mobile, even wearable -- other services are arriving such as SMS, WAP, I-Mode, etc. Millions of people even have their own web page. We are all connected as equals in this world and we work and play together in harmony. It's one big global village and we're going to find ever more sophisticated ways of telling each other stories and having adventures together.

If you thought the five-digit figures for players connected to online games were large scale, you're just seeing a MUD on steroids. Wait until you see an online world as large as the web, unfettered by bottlenecks caused by primitive client/server technology.

Everyone is gradually waking up to the fact that for an application to be of this global scale, it must be a distributed system. Consider file and resource sharing facilities such as Napster and MojoNation, tools such as NetZ, and games in development such as Atriarch. People are beginning to see that the future is distributed. Not everyone believes that global scale applications stopped with E-mail and the Web.

So, if you want to move on from small scale online games such as Ultima Online, Everquest and Asheron's Call, toward global scale virtual environments like those depicted in The Matrix, Tron, and Dark City, then you're going to have to wrap your brain around the distributed systems approach.

In a nutshell? Well, instead of sharing files as with Napster, you're sharing the latest news concerning the state of the game model. Some of this news is created by each player when they make changes to their environment, but mostly each player subscribes to the news concerning that part of the game world that they're currently interested in. Each player's computer processes the portion of the game model they possess locally. It will seem difficult to believe that this can work for real-time information just as much as MP3 files, but then that's the old 'paradigm shift' raising its head again.

These are the continuing propositions of Crosbie Fitch and his distributed systems approach to online games.

Time for a Revolution

Modern operating systems may be relatively straightforward to use, but under the surface they are now huge behemoths that require years of study and experience to administrate. Developing them requires huge amounts of manpower, which only the largest of organizations can afford.

It doesn't have to be this way. Just as the PC arose to avoid the costs of using, administrating and developing mainframes, a new operating system will arise that avoids the costs associated with the current crop.

In pursuit of this utopian OS let's go back to first principles, but this time we'll embrace the fact that all computers are networked together. We'll also decide that the primary purpose of all these connected computers is to allow the creation and exploration of one or more virtual worlds. This is, after all, what everyone spends their lives trying to do, isn't it? Thinking of how much better the world would be, if only...



More often than not, achieving a big change requires collaboration and vision. Today that vision may be conveyed by pictures, words, music, even models, and many other things besides. From a pragmatic perspective, we use computers to produce documents, send e-mails, and play games. From a higher perspective, we use computers to create and explore virtual worlds.

In effect, I'm suggesting that we move on from the document-centric paradigm, to a world-centric one. Documents are simply messages that make up a set of transactions between one world state and another. For example, in the virtual world of property and work, a sale consists of documents describing an item of property, proposing a sale, proposing a purchase, and confirming agreement. In one world the item belongs to Fred, and the money to Bill, and in the other world, it is vice versa. Messing about in a variety of different worlds is what we're all really doing, and if computers can help us do it more effectively, so much the better.

But maybe I shouldn't waste too much of this column waffling on about such philosophical issues; let's get down to the nitty-gritty of how you go about making computers world-centric -- even if you're not quite ready to give up being document-centric just yet.

Making it Easy for Ourselves

If we're going to do this, how are we going to make things easy for ourselves? How are we going to produce a network operating system without the might of an organization such as Microsoft or IBM? The answer takes a leaf out of nature's book, saying "Sod perfection! We'll just go for 'mediocre' and lots of it," -- i.e. get a billion vaguely functional neurons, slap 'em together any old how and get coherence through parallel processing and redundancy. This has to be better than the alternative of buying half a dozen super-servers and waiting years for legions of coders to figure out how to write perfect software.

The next few sections touch on each of the areas that need to be addressed by a new cyberspace network operating system and list the ways in which we can make our lives easier -- and the final system more effective -- by throwing out difficult problems along the way.

Resource Scalability
Remember, Moore's law tells us that we don't have to worry about CPU or storage resources: their growth will far exceed the growth in network performance, especially as traffic expands to consume any new bandwidth and so prevents much improvement in latency.

Therefore, if we make any preference for a resource, we make it in favor of communication. You need to be a real scrooge here.

Miserly Communication

There are big problems involved in designing large systems to operate across a network. It's not like a CPU data bus: perfectly synchronized and reliable. The big wide world is a different kettle of fish from a nice, neat, printed circuit board. Across the Internet, you don't know anything about how long a message takes to transmit, whether it'll actually get to its destination, and whether things will get better, worse or stay the same. It's a case of 'bung the letter in the mail and hope'. If you're lucky, you'll get a reply - sometime. Maybe.

You cannot approach systems development in the presence of such vagueness in the same way as you can approach it on a predictable and reliable PC. Unfortunately, people do. The attitude is that you put some higher level layers over the network that hide its unreliability, and then you can simply carry on developing systems as though everything was still perfect. This may not be much of a problem on local area networks, but with the Internet operating on a global scale, large systems simply grind to a halt if they refuse to accept less than 100% consistency.

But with that goal of perfection still uppermost in their minds, system developers can typically only countenance lowering their requirements just enough to allow their large systems to tick over once more.

I'm saying, "No way man, just give it up. Throw the whole idea of consistency and integrity out of the window - you just ain't going to get anywhere until you let go!"

Instead, we just go for best effort. Don't get obsessive about perfection, but contrive a tendency for glitches to become inconsequential. Just like social engineering, don't rely upon people to operate like clockwork, but find a way to make their aggregated inclinations result in a desired behavior.

Minimize Communication
If network communication is such a problem, let's do as little of it as possible. The emphasis, of course, being on 'as possible'. We still have to communicate, and every ounce of bandwidth will be precious. And don't think there'll ever be 'enough', because there'll always be a little more realism that can be added. Remember that no system can contain a model of itself to the same fidelity.

Predict
One way of avoiding communication is to predict as much as possible. We only transmit what can't be predicted. It's simply that there's not much point communicating something the recipient can work out for themselves (or something they already know).

In some cases we may be able to communicate corrections to predictions if we know the predictions that others will be making.
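
To make the idea concrete, here's a rough sketch in Python of the classic dead reckoning trick (none of this comes from a particular engine; the names and threshold are purely illustrative): sender and receiver share the same extrapolation rule, so the sender need only transmit a correction when the true state drifts too far from what it knows the receiver will be predicting.

```python
import math

class DeadReckoner:
    """Both ends extrapolate position from the last shared
    (position, velocity, time) sample, so nothing needs to be
    sent while the prediction holds."""

    def __init__(self, threshold=0.5):
        self.threshold = threshold        # max tolerated error, world units
        self.pos = (0.0, 0.0)
        self.vel = (0.0, 0.0)
        self.t = 0.0

    def predict(self, now):
        dt = now - self.t
        return (self.pos[0] + self.vel[0] * dt,
                self.pos[1] + self.vel[1] * dt)

    def needs_update(self, true_pos, now):
        px, py = self.predict(now)
        err = math.hypot(true_pos[0] - px, true_pos[1] - py)
        return err > self.threshold

    def correct(self, pos, vel, now):
        self.pos, self.vel, self.t = pos, vel, now

# Sender side: only transmit when the shared prediction has drifted.
shared = DeadReckoner(threshold=0.5)
for now, true_pos, true_vel in [(1.0, (1.0, 0.0), (1.0, 0.0)),
                                (2.0, (2.1, 0.0), (1.0, 0.0)),
                                (3.0, (2.5, 2.0), (0.0, 2.0))]:
    if shared.needs_update(true_pos, now):
        shared.correct(true_pos, true_vel, now)
        print("send correction at t =", now)   # fires at t=1.0 and t=3.0
```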

Keep network distance small
If latency tends to be worse the further apart computers are on the network, then we should prefer communication with neighbors. Of course, this means we need to measure latency to some extent and use that to determine how much of a neighbor they are.

Prioritize
If we have a limited amount of bandwidth, or there is any other limitation on communication then it makes sense to prioritize messages according to their importance, e.g. based on the value of the information and its need for timeliness.
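
As a sketch of what such prioritization might look like (the scoring rule here is pure illustration, not a recommendation): keep a priority queue of outgoing messages and drain the most important first, letting the rest wait for spare bandwidth.

```python
import heapq

class OutQueue:
    """Outgoing news, most important first; whatever doesn't fit in
    this tick's bandwidth budget simply waits for the next one."""
    def __init__(self):
        self.heap = []
        self.counter = 0                 # tie-breaker, preserves FIFO order

    def push(self, msg, value, urgency):
        score = value * urgency          # illustrative importance measure
        heapq.heappush(self.heap, (-score, self.counter, msg))
        self.counter += 1

    def drain(self, byte_budget):
        sent = []
        while self.heap and byte_budget > 0:
            _, _, msg = heapq.heappop(self.heap)
            byte_budget -= len(msg)
            sent.append(msg)
        return sent

q = OutQueue()
q.push("avatar moved", value=10, urgency=5)
q.push("distant tree swayed", value=1, urgency=1)
print(q.drain(byte_budget=12))           # ['avatar moved']
```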

No Guarantees
There's no such thing as a free guarantee. Guarantees always cost something. We just require that best effort is used. If we need anything further than that then we'll provide it ourselves. So we don't need guarantees for message delivery, message ordering, or message integrity. However, it would be nice to know if a received message is corrupt or not, and what its serial number is, if only to be able to collect some statistics that will give us an idea of how good communication is with a particular party.
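
A minimal sketch of that idea: stamp each datagram with a serial number and a checksum, purely so the receiver can gather statistics. There is deliberately no retransmission machinery here at all.

```python
import struct, zlib

def wrap(serial: int, payload: bytes) -> bytes:
    # 4-byte serial + 4-byte CRC32 of the payload, then the payload itself.
    return struct.pack("!II", serial, zlib.crc32(payload)) + payload

def unwrap(datagram: bytes):
    serial, crc = struct.unpack("!II", datagram[:8])
    payload = datagram[8:]
    intact = (zlib.crc32(payload) == crc)
    return serial, intact, payload

# Gaps in the serials estimate loss; bad checksums estimate corruption.
# That's knowledge about the link's quality, not a guarantee about it.
serial, intact, payload = unwrap(wrap(42, b"chair at (3,4)"))
print(serial, intact)    # 42 True
```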

Bandwidth is precious
Bandwidth is precious, so let's not waste it or cause bottlenecks in the system. We'll take pains to distribute knowledge and lines of communication so that no single computer becomes too much of a focus. We'll always transmit the highest level representations of state, and utilize compression techniques where appropriate.

For example, if a piece of geometry can be described in a concise manner, such as 'a chair', then it doesn't need to be sent as a 500K Max file. This assumes, of course, that the recipient knows what chairs are and can generate them itself.

So we always deal at as high a level as possible, reverting to lower level descriptions when time permits. In terms of prioritization, it may often be useful to transmit low levels of detail at a higher priority than higher levels of detail for the same item.
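
For instance, in sketch form (the shared archetype library is an assumed convention between sender and receiver, not anything standard):

```python
# A shared library of archetypes stands in for 'knowing what chairs are'.
ARCHETYPES = {"chair": lambda: {"kind": "chair", "polys": 800}}

def encode(obj):
    # Prefer the concise symbolic form; fall back to raw geometry.
    if obj["kind"] in ARCHETYPES:
        return ("symbol", obj["kind"])       # a handful of bytes
    return ("geometry", obj)                 # potentially the 500K file

def decode(msg):
    tag, body = msg
    if tag == "symbol":
        return ARCHETYPES[body]()            # generated locally, for free
    return body

print(decode(encode({"kind": "chair", "polys": 800})))
```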

Resource Prudence

Communication channels are our most precious resource. However, whilst storage and processing are apparently inexhaustible in comparison, they are still finite. So, as long as it doesn't compromise communication performance, we should still strive to reduce our use of these other resources.

Storage is Finite
While the combined storage of millions of computers may amount to a lot, we should still contrive to share the storage burden evenly. This means minimizing unnecessary duplication, and ensuring that stored representations are fairly compact and high level.

Processing is Finite
Perhaps CPU is the cheapest commodity, but there'll always be extra fidelity to fill idle hands. We should prioritize our modeling workload, processing the more important before the less important. For example, there should be greater fidelity at the player's focus than at their periphery.
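
A sketch of that workload prioritization, assuming distance from the player's focus as the importance heuristic (any measure of interest would do):

```python
def tick(objects, focus, budget):
    """Spend this frame's processing budget on the objects nearest the
    player's focus; the periphery just gets a staler, cheaper estimate."""
    def dist2(obj):
        (x, y), (fx, fy) = obj["pos"], focus
        return (x - fx) ** 2 + (y - fy) ** 2
    for obj in sorted(objects, key=dist2)[:budget]:
        obj["step"](obj)             # run one step of the object's behavior

objects = [{"pos": (1, 0), "step": lambda o: print("near object updated")},
           {"pos": (90, 90), "step": lambda o: print("far object updated")}]
tick(objects, focus=(0, 0), budget=1)    # only the near object runs
```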

What do we End Up With?

We end up with each computer containing a world database that contains a portion of the entire world. Just as in any other game, many of the objects have behaviors, which means processing must be performed in response to various events. However, in order that each computer doesn't end up quickly diverging from any kind of agreement with every other computer's idea of what's going on, there needs to be some kind of synchronization.

As I said earlier, our approach is to let go of perfection, so perhaps 'synchronization' is too strong a word. I'd suggest that 'consensus' may be a little more palatable. Consensus means that each computer needs to participate in a global conversation about a virtual world in order to reach a continuous agreement as to its current and, to all intents and purposes, true state.

All computers are peers: there are no oracles, no central servers that define truth or have direct access to it. We don't have central servers because they cause bottlenecks and become too costly to maintain - they're too big for their own boots.



Instead, all computers participate as peers in constructing a virtual world. There is no absolute truth, no definitive version, merely a consensual reality. If that's good enough for philosophers in the real world, it's good enough for players in the virtual world.

The manner of achieving consensus between millions of computers can be recognized as a distributed system. It's not that there's any inherent merit in distributed systems per se, whether for massive multiplayer games or anything else, it's simply that the system design one arrives at for enabling scalable online games fits the definition of a distributed system quite well. What exactly are we distributing? We're distributing information between distributed resources so that agreement can be maintained. And true to the definition of distributed systems, this is all done transparently, so that each player can believe that they have the entire virtual world spinning in a beige box under their desk.

What to Distribute -- State, Events, Behavior?

What is the consensus about; what is it that we have to agree about? You might conclude that we just need to agree about the state of the virtual world, and everything else falls out of that. But there are alternatives, and they shouldn't be dismissed out of hand.

There is the matter of that which causes a change in state, and that which governs the causes. These are 'events' and 'behavior' respectively. In the real world, behavior is equivalent to the laws of nature and is hardwired into the fabric of reality. In the virtual world, behavior consists of a bunch of rules that are typically represented in a scripting language of some sort (if it isn't hardwired into the game code - tut tut) and so becomes part of world state. Events are the causes of behavior, and so events cause a change in state. Because of this, events are often interchangeable with the collection of state changes they produce, i.e. an event is in effect a 'delta'.

There are two approaches to keeping computers in sync: communicate the state itself, or communicate the changes. There is thus a choice between the following two items:

  1. Communicate state and determine events locally

  2. Communicate events and determine state changes locally

The thing is, state by its very nature contains a snapshot of reality, whereas events represent changes in reality between one snapshot and the next. Unfortunately, if you miss an event, it's much worse than if you miss a snapshot. In order to maintain perfect state, you can't miss out on any events. If you do, you need to retransmit them (and in the correct order). This is because events have a context and represent a known way of transforming state between one version and the next. They also rely on perfect knowledge of the current state in order to transform it into the next.

We've opted for the 'best effort' way of working so that means we've got to tolerate missing information. It's fairly clear, then, that distributing state is the better road to travel, given that state represents an accumulation of events. We're also encoding behavior as part of world state as it needs distributing too. Because it's part of the state, we don't need to consider it separately.

Distributing State

Each computer knows its local state perfectly; it just doesn't know what the consensus on the current global state is. In simple terms, it needs to compare its state with that of a peer and then receive an update of the difference. The difference is equivalent to the series of events that gave rise to it.

Ah, but reality is a change in state over time, and I haven't catered for time yet, have I? Yes, it's no good to have computers simply converging to a single steady state; they've got to converge to a state that dynamically changes over time. For every element of state there is a corresponding time associated with it, and given the choice between the state at one time and another, we'd presumably overwrite the oldest with the latest. But how do we tell the difference between a state computed locally and a state computed locally on another computer?
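
Here's the latest-wins rule this suggests, in sketch form: every item carries the simulation time at which it was computed, so updates can arrive late, duplicated, or out of order without doing any damage.

```python
class WorldDB:
    """Each item of state carries the simulation time at which it was
    valid; a newer version simply overwrites an older one, so updates
    can arrive late, duplicated, or out of order without harm."""
    def __init__(self):
        self.items = {}          # item_id -> (timestamp, value)

    def merge(self, item_id, timestamp, value):
        current = self.items.get(item_id)
        if current is None or timestamp > current[0]:
            self.items[item_id] = (timestamp, value)
            return True          # accepted as news
        return False             # stale; quietly discard

db = WorldDB()
db.merge("door7", 10.0, "open")
db.merge("door7", 8.0, "closed")     # late arrival, ignored
print(db.items["door7"])             # (10.0, 'open')
```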

Obtaining Agreement

We have five million computers messing about with their own version of reality and somehow we're going to get them to come to a consensus. It doesn't seem sensible to have any kind of argument about it. After all, how can computers argue? They're all peers anyway. Even if they could argue, it would be a waste of time that we couldn't afford. It sounds like some kind of arbitration is required.

Conventionally this arbitration is obtained by having an arbitrating version of the world located on a server. You tell all the clients "Don't argue - the server is boss. If you've got different ideas, forget them and let the server overrule everything you come up with." Of course, the clients still have to model the world themselves, but defer to the server whenever they get an update as to the correct state of affairs.

We aren't using the client/server approach, so what do we do? We distribute it -- just like everything else. We use distributed arbitration. We analyze the client/server approach and determine the essence of what enables a server to arbitrate state, divorced from the coincidence that it's all held in one place. The analysis is simply that for all versions of every item of state, there is a single item version that represents the one true version, and this one true version happens to be held by a particular computer. In client/server systems the server is the 'particular computer' in all cases. However, in a distributed system, we don't locate these true versions on the same computer; we distribute them all over the place. So there is still a 'one true version' of every one thing, but it could be on any computer.

Somehow a computer needs to know whether it's got the one true version, or whether it's simply guessing at it. In the latter case, it also needs to know who has got the one true version, and what the latest news is concerning its value. What we end up distributing is not just 'estimated state', but also news of the 'true state', with more recent news overwriting old news.
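
In sketch form (the field names are illustrative, not from any real system), each item of state carries the identity of its supposed owner, so a computer can tell whether it holds the one true version or merely an estimate awaiting news:

```python
from dataclasses import dataclass

@dataclass
class StateItem:
    value: object
    timestamp: float
    owner: str          # the node believed to hold the one true version

class Node:
    def __init__(self, node_id):
        self.node_id = node_id
        self.db = {}    # item_id -> StateItem

    def is_definitive(self, item_id):
        # Do we arbitrate this item, or are we merely estimating it?
        return self.db[item_id].owner == self.node_id

    def on_update(self, item_id, item):
        # News of the true state overrides any local estimate,
        # provided it is fresher than the news we already have.
        held = self.db.get(item_id)
        if held is None or item.timestamp > held.timestamp:
            self.db[item_id] = item

n = Node("alice")
n.on_update("door7", StateItem("open", 10.0, owner="bob"))
print(n.is_definitive("door7"))   # False: bob holds the true version
```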

Ownership

Another way of looking at distributed arbitration is to think about it in terms of ownership (of state), i.e. if a computer is the owner of an item of state, then it is the one that arbitrates over the true state of that item. All we need to do is keep track of the current owner, and determine who should own what.

By allocating ownership we are determining which computer should have the right to determine the definitive version of the owned state. For any item of state, which computer should hold authority? The principles discussed earlier can be a guide. We want to minimize communication and communication distance. Therefore, we want to keep distributed state closest to those computers that are using it most, or those to which it is of greater importance -- I'll term this 'Interest'. For example, a computer modeling its player's avatar sitting in a vehicle is going to be most interested in the state of the avatar and the vehicle they're in. Computers whose players' avatars are in the vicinity and likely to be affected, will be the next most interested. And so on.

We have to continuously determine which computer is most interested in each item of state, and transfer ownership accordingly. Of course, we have to stop this process from thrashing, so the benefits of ownership have to be balanced against the cost of transferring ownership.
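
That balance might be as blunt as a margin; a sketch, with made-up numbers:

```python
TRANSFER_COST = 5.0    # illustrative cost of a handover

def should_transfer(owner_interest, bidder_interest):
    # Only hand ownership over when the bidder's interest exceeds the
    # current owner's by more than the handover costs; the margin stops
    # ownership ping-ponging between two near-equally interested peers.
    return bidder_interest > owner_interest + TRANSFER_COST

print(should_transfer(10.0, 12.0))   # False: not worth the upheaval
print(should_transfer(10.0, 20.0))   # True: the bidder clearly cares more
```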

Expressing Interest

As with other kinds of property, those most concerned with its upkeep tend to be the best ones to own it -- or, at least, determine its maintenance. Interest is a measure of relevance of an item of state to the player. Things are more relevant the more they improve the quality of the player's experience, i.e. for good modeling of their avatar's local scenery and its denizens, presenting good images of the avatar's senses, and conveying the player's influence over their avatar.

The player will have most interest in controlling the avatar and seeing through its eyes. And the avatar in turn will have an interest in the scenery, in fact anything that is in range of its senses. It will also have a lesser interest in scenery that it might see shortly on its present heading (if any), and lesser still, scenery that it might see if it took any possible course of action available to it.

A computer will express interest in state according to the needs of the player. How do you express interest? It could simply be a matter of saying "let me know all objects in your database whose bounds intersect with the supplied bounds", or perhaps "all objects that may be visible from this viewing area". Or it could be "let me know all objects where property 'type' equals 'avatar'". An interest is a set of criteria designed to match a set of interesting objects as opposed to uninteresting objects.
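
In sketch form, an interest is just a predicate, and a computer's total interest is the union of its criteria (a point-in-region test stands in for a proper bounds intersection here):

```python
def bounds_interest(bounds):
    # Matches objects whose position falls inside the given region.
    (x0, y0, x1, y1) = bounds
    def matches(obj):
        x, y = obj["pos"]
        return x0 <= x <= x1 and y0 <= y <= y1
    return matches

def property_interest(key, wanted):
    # Matches objects with a given property value, e.g. type == 'avatar'.
    return lambda obj: obj.get(key) == wanted

# A computer's overall interest is the union of its criteria.
criteria = [bounds_interest((0, 0, 100, 100)),
            property_interest("type", "avatar")]

objs = [{"pos": (50, 50), "type": "tree"},
        {"pos": (500, 9), "type": "avatar"}]
interesting = [o for o in objs if any(c(o) for c in criteria)]
print(len(interesting))   # 2: one matched by bounds, one by property
```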

And to whom is this interest expressed? Well, you could express it to the current owner of the state. Once you'd located the current owner, you could set up a direct communications link and get updates to the definitive state direct from the owner. Furthermore, if you're more interested in the state than the current owner, you might well transfer ownership.

However, if ownership is continually changing left, right, and center, it's going to be a bit difficult for millions of computers to keep track of who owns what. Plus, one may inadvertently create bottlenecks if a popular item of state ends up in a computer with poor connectivity. There may also be a fair bit of thrashing, if not squabbling, between computers over ownership if several computers share a similar interest in the same item. And what happens if a computer which owns something goes offline? Bit of a problem eh?

There's evidently a need for some discipline and organization here.

In Essence, That's It!

We now have the essence of a scalable networked games engine. Each computer expresses interest in its player's avatar, and to this the avatar in turn adds its own interests. The computer thus amasses all state of interest, and when it processes the behaviors within that state, it will be producing as good an estimate of that portion of the world as every other computer in the same area. For all the state that it owns, it will be computing the definitive state, and of that which it does not own, it will both be computing estimates and receiving overriding, intermittent updates (originating from the owning computers). Everything else is just about fine tuning and making the system stable.

At this point we can make a conjecture as to the kind of result we're going to be ending up with. With a very poor connection things will be a bit jerky, as incoming state overrides are likely to have diverged from local estimates. But, then, it would be difficult to do any better with any other system. The better the connection gets, the less divergent the incoming overrides will be -- given that they can be more frequent and of greater fidelity.

Some will argue that you can't just allow state to be overridden; they'll demand that you perform some kind of smooth convergence to prevent unsightly discontinuities. Well, there's always scope for tweaking, but I'd say the sooner a computer accepts the 'truth', the less potential there is for escalating divergence between computers. Anyway, the objective here is to be simple and scalable -- we can work on 'complex and scalable' another day. Hey! Some people would be delighted to explore a vast virtual world even if over their 28k modem there was a bit of pop-up or impromptu teleporting. Even so, apart from telling people to get an ADSL adaptor, there are a variety of techniques for coping with latency by addressing it within the game's design.

Organizing a Distributed System

How do we get a fairly egalitarian network of millions of computers to organize itself so that each computer always has a fair share of the workload, appropriate to the needs of the players it supports, and that it always has the optimum set of communications relationships with its peers so that it is assured of the most up to date information?

Well, I don't know about you, but whenever I see a system that is organized I see a hierarchically organized system. Yes, perhaps all I have is a hammer and everything I see is a nail -- you may well berate me for my lack of lateral thinking. However, if there is a really neat non-hierarchical way of organizing the distributed system we're ending up with, you'll have to figure it out yourself or read a book to find it. Please, let me know if you do. Until that time, I'll describe the hierarchical way of doing things.

Changing Interests and Responsibility

Because players' avatars will be mobile in the virtual world, their interests in scenery will be continually changing. Hence each computer will be changing its interests, changing the portion of the world that it holds, and forming new relationships with computers that share those interests. We end up with a dynamic or virtual network, one that is not necessarily related to the shape of the physical network. Furthermore, computers will be blinking in and out of this virtual network as players come and go or their little brothers jealously pull the plug.

It's time to introduce another concept. This time it's 'responsibility'. Every computer has a responsibility to do its best in modeling its portion of the virtual world, along with keeping track of what it owns and doesn't own. This is because ownership is a precious commodity granted to a computer to improve its modeling of the things most important to it. It's precious because there can only be one owner out of millions, and ownership is effectively something that is competed for. Furthermore, by being the key mechanism by which world state is arbitrated, ownership is a vital aspect of the system, so we can't afford to lose track of it (continuous broadcasting of ownership details isn't a solution either). We must introduce a rule that if a computer is given ownership of something, it is responsible for this privilege: it has duties to perform in maintaining the security of this ownership.

The nature of responsibility is that responsibility is delegated. Ah ha! Here we get to the hierarchical bit, and of course, in any hierarchy there is always someone (or something, a committee, say) ultimately responsible -- someone at the top.

This is how you can think of the transformation from a client/server organization -- with the server owning all of its authoritative model which is being replicated by the clients -- into a distributed organization with the server delegating ownership to clients. So, in a way we still have a client/server system. It's just that now, the server is no longer a bottleneck, because it is highly likely that it will have delegated all ownership until few other computers ever have cause to refer to the server.

Let's just run through this again. The founding computer (perhaps more than one) is the one that starts off owning everything. Then a new computer will connect to this one and perhaps obtain ownership of parts of the world that it's interested in. A subsequent computer will connect to either of these two, perhaps obtaining some ownership. And so it goes on. However, whilst ownership is freely passed on, responsibility is usually just delegated. If delegated, responsibility can only be surrendered back to the computer that delegated it. If responsibility is held at the highest level then it can be transferred.

Perhaps a better analogy would be with freehold and leasehold of property. We'll say that the people enjoying the use of the property are the current 'owners', who have leased the property from the freeholder. The current 'owners' could in turn sub-let the property to others who would become the new 'owners'. However, whilst the lease can be sub-let ad infinitum, ultimately ownership will always revert to the freeholder, i.e. the guy at the top. And only the freeholder can transfer the freehold.



In the case of distributed state, an initial, small group of computers are given freehold over various portions of the state. They will also be the owners initially, but joining computers will lease ownership of state from them. Any time a computer disappears, its leasehold reverts up the chain, until it reaches the freeholder. Of course, if the freeholder disappears, then the highest leaseholder can claim the freehold (until or unless the original freeholder re-appears).
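
A sketch of one item's reversion chain (names illustrative; a real system would track many such chains):

```python
class Lease:
    """One item's chain of ownership: chain[0] is the freeholder,
    chain[-1] is the current owner (the lowest sub-leaseholder)."""
    def __init__(self, freeholder):
        self.chain = [freeholder]

    def sublet(self, new_holder):
        self.chain.append(new_holder)

    def holder(self):
        return self.chain[-1]

    def node_left(self, node):
        if node not in self.chain:
            return
        i = self.chain.index(node)
        if i == 0:
            # Freeholder gone: the highest leaseholder claims the freehold.
            if len(self.chain) > 1:
                self.chain.pop(0)
        else:
            # Leases below the departed node revert to its superior.
            del self.chain[i:]

lease = Lease("root")
lease.sublet("alice")
lease.sublet("bob")
lease.node_left("alice")      # bob's lease came via alice...
print(lease.holder())         # ...so ownership reverts to 'root'
```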

So, it is possible that a founding, root computer can be removed from the system. As long as the root has given up all its ownership to its delegates, then it can abdicate, leaving its delegates to become new roots.

Keeping Track of Responsibility

Now we could allow any computer to be responsible to multiple computers, or we could say that a computer's responsibilities must always be obtained via a single superior. The choice is between a single hierarchy where computers are always inferior or superior to each other for everything, or a multitudinous hierarchy where the relationship is dependent upon the particular item of state concerned. In the latter case it would be like working for many different companies simultaneously, and depending upon the matter at hand, Fred could be Jim's boss on one thing, and Jim could be Fred's boss on another.

I suspect that it may be simpler and easier to administrate if we adopt the single hierarchy. In other words, the relationship between computers as far as responsibility is concerned is absolute: either you are inferior or superior. Of course, you can have peer relationships, but responsibility is not conveyed by those.

Now the advantage in doing this is that it's simple to keep track of who granted you responsibility for anything. If you're responsible, then you know that there's only a single superior that you obtained it from. This means less storage is used to record such details. It also makes it fairly important that a superior is chosen carefully: it must be selected such that the objects it is responsible for are a good match for the inferior's interests.

I would not dismiss the multitudinous hierarchy as unsound out of hand; indeed, it may well be worth future research. But I have a hunch things will be simpler if we have the computer as our nodal, hierarchical unit, as opposed to the item of state. I think I'm inclined this way because it is computers that are the nodes that appear on the network, or disappear from it.

Object Lifecycle

Who's God?
In the beginning someone has to create the virtual world. In fact it may be necessary for the creation process to be an ongoing one even whilst the world is being explored by players. We just grant creation privileges to certain people who can import or create new objects in the virtual world as they see fit. These objects will initially be owned by their creator's computer, and will be immediately up for competitive ownership requests from any other computers operating in the vicinity. For example, if a god drops a bear into a forest, there may be no avatars in the vicinity whose computers would be interested in owning the bear; however, there may be a large computer that has a current interest in the forest and all its denizens (because of a large previous interest in that area), and it may well make a bid for ownership of the bear. Alternatively, if the god has an avatar in the forest near where the bear has been placed, then the god's computer may retain ownership. Who knows? It's all a matter of heuristics.

How do Objects Create New Objects, and Who Owns Them?
Behavioral modeling is duplicated, but this is safe: divergent state will be overridden by incoming updates. This works because state is uniquely named, making it possible to map divergent versions to the respectively named objects.

The question, though, is who creates objects in the first place? A player may wave a wand and create a chicken, and this will be replicated on the computers of any witnesses. However, what about the eggs that the chicken lays? Does each chicken (the original and its duplicate shadows) lay an egg independently, and somehow they all have the same sequence of serial numbers and thus the eggs coincide that way? How about if the number of eggs were dependent on an environmental factor? It might be that some shadow chickens would produce more than others because the environment was slightly different. And who owns the eggs? Each egg can only be owned by a single owner, and while it may seem logical that the owner of the chicken is a good candidate as the initial owner of the eggs, who owns the spurious egg generated by a divergent shadow chicken?

Nope, to keep things manageable, we'll have to require that object creation is a privilege of an owned object, i.e. that whilst behavioral prediction is ok, creational prediction is not. The shadow chickens will go through the same egg-laying motions, but the eggs will not appear until a state update arrives with details of new egg objects in the vicinity.

Object Transformation
Now it may seem a bit disappointing that whilst one can exploit predictable behavior, one cannot exploit predictable creation. There is a way round this problem and that is predictable transformation - which is just a different perspective on behavior really. What you do is you have a couple of eggs lying around inside each chicken, created in advance - you don't wait for the laying process to create the objects. This means the egg objects can undergo a predictable transformation from invisible and internal, to visible, external and independent. All the shadow chickens can now predictively lay their eggs, and have them appear, because they were cached.
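
In sketch form, the trick is simply to pre-create the children, so that 'laying' mutates an existing, uniquely named object rather than conjuring a new one (names and the cache size of two are illustrative):

```python
class Chicken:
    """Eggs exist in advance, invisible and internal; 'laying' is then
    a predictable transformation of an existing, uniquely named object
    rather than the creation of a new one, so shadow chickens can lay
    predictively without waiting for the owner."""
    def __init__(self, name):
        self.cached_eggs = [{"id": f"{name}-egg-{i}", "state": "internal"}
                            for i in range(2)]   # pre-created by the owner

    def lay(self):
        if not self.cached_eggs:
            return None   # cache empty: wait for the owner's update
        egg = self.cached_eggs.pop()
        egg["state"] = "external"     # transformation, not creation
        return egg

print(Chicken("henrietta").lay())
# {'id': 'henrietta-egg-1', 'state': 'external'}
```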

That actual creation is so difficult is all well and good, I suppose, because it is difficult in the real world too. Objects tend to be constructed out of collected components, rather than simply materializing out of thin air. Torpedoes are loaded into a submarine, fired at some point, and then explode into debris - like many objects, they travel with the object that issues them.

Object Destruction
Object destruction is quite strange really. Once an object has gone through its lifecycle from 'new', through any phases such as 'in flight', 'exploding', 'exploded', 'invisible', and is finally of no relevance to anyone, it doesn't get destroyed as such, it's simply of no interest. It will eventually fall out of everyone's database as one of the lowest priority objects you can have. There's no need to specifically ensure that the object is removed from everyone's computer. In fact, it's probably safest that way, because a prediction that a shadow object has blown up may be overruled a second later by an update from the owned object that says it didn't*.
[*Don't start complaining about these anomalous glitches now. Remember we've sold our souls to get a corrective glitch that lasts a second instead of taking 10 seconds to make sure we don't make a mistake in the first place.]
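
So 'destruction' can be sketched as nothing more than cache eviction ordered by interest (the interest measure is whatever heuristic the computer is already using):

```python
def evict(db, interest, capacity):
    """Nothing is ever deleted as such; when storage runs short, the
    least interesting items simply fall off the end of everyone's list."""
    ranked = sorted(db.items(), key=lambda kv: interest(kv[1]), reverse=True)
    return dict(ranked[:capacity])

db = {"avatar": 100, "chair": 10, "spent torpedo": 0}
print(evict(db, interest=lambda v: v, capacity=2))
# {'avatar': 100, 'chair': 10} -- the spent torpedo quietly fades away
```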

Object Over-Production
It's possible I suppose that there may be some nefarious gods out there that might think it fun to create a broomstick or some kind of automata that auto-replicates without end: "Hey let's bring the system down by filling it up with bunny rabbits!".


I guess one just has to create a disease object that attaches itself to a bunny and spreads to any other bunny it gets close to, shortening the bunny's lifespan depending upon the rate of contact, i.e. the more bunnies there are in an area, the quicker they die off.

Remember that a computer is never obliged to store details of everything. Indeed the rules of interest may determine that the 1,000th bunny is inherently less interesting than the 100th, and so over-population may not present too much of an operational problem, even if it causes a pollution problem.

One will just have to try and secure the privileges that enable god-like powers. And here we touch on security, but this is another article all by itself.

What Next?

Other things that remain to be discussed in the next installments:

  • Implementation Details

  • Solving Connection and Network Problems

  • Security

  • Administration

  • Designing Worlds, Designing Games


About the Author

Crosbie Fitch

Blogger

Crosbie Fitch has recently been exploring business models for online games. He has previously worked at Argonaut, Cinesite, Pepper's Ghost, Acclaim, and most recently Computer Artworks. Always looking for the opportunities that combine 'large scale entertainment', 'leading edge technology', 'online', and 'interaction' - one day, all at the same time... He can be reached at [email protected]

