Distance-based descriptions
mudlab.org Forum Index -> Design
Tonitrus



Joined: 11 Feb 2010
Posts: 20

PostPosted: Thu Nov 18, 2010 6:31 am    Post subject: Reply with quote

My approach to sensory messages works a bit differently. In the simplest form, I have 5 senses, each with a "score" and a "range". Say 4 is good eyesight and 100 is the normal range; a near-sighted person with excellent up-close vision might have a score of 6 but a range of only 25. Within their range, they'd have better vision than the (4, 100) person, but their vision would taper off rapidly beyond it.

Senses out of the way, I have "messages" for sensory information. In the simplest form, they work like this:

<sense>@<difficulty>: <message>


A skill check is made against the sense score at the target difficulty; if the check passes, the message is seen, otherwise it isn't. As I have an unhealthy obsession with powers and logarithms, range works as a soft cap: distance beyond the range increases the difficulty by -1 at 2x range, -2 at 4x, -3 at 8x, -4 at 16x, and -5 at 32x. Note that my skill system uses small (but floating-point) values, and -5 is a severe penalty. I'm using a coordinate system, for reference. Also note that, since range increases the difficulty, messages intended to be visible from longer ranges should have lower difficulties.
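That soft cap works out to a penalty of -log2(distance / range) once the distance exceeds the range. A minimal sketch of the formula in Python (the function name and the no-penalty-within-range behaviour are my reading of the post, not Tonitrus's actual code):

```python
import math

def range_penalty(distance, sense_range):
    """Difficulty penalty for perceiving something beyond a sense's range.

    No penalty within range; beyond it, -1 at 2x range, -2 at 4x,
    -3 at 8x, and so on -- i.e. -log2(distance / range).
    """
    if distance <= sense_range:
        return 0.0
    return -math.log2(distance / sense_range)
```

Since the poster's skill values are small floats, a -5 result (32x range) is already a severe penalty.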

At any rate, Python is pretty hilariously amenable to ridiculous data structures, so the system is pretty flexible. I'll give an example of my "say" command.

Tonitrus says "Hello."

In simplest form:

Code:
{'hearing': -4}, "Tonitrus says 'Hello.'"


However, the message is actually a list of checks, so a single message can contain any number of individual messages; the first successful check fires its message and breaks. So the full version of my say command looks more like this:

Code:

[[{'hearing':  -4}, "Tonitrus says 'Hello.'"],
 [{'hearing':  -6}, "Tonitrus says something."],
 [{'hearing': -10}, "You hear a voice somewhere."]]


Each check is made in order; the first success shows that message, then processing ends. Note that this is all a single message: only one of those sub-messages will show. But that wasn't flexible enough for me. Each set of checks is itself a Python dictionary, and every check in the dictionary must pass to see a particular message, allowing for things like:

Code:

[[[{'hearing': -4, 'sight': -5}, "The sky flashes with lightning and thunder roars around you."],
  [{'hearing': -6, 'sight': -10}, "The sky flashes, and a clap of thunder sounds."],
  [{'sight':  -4}, "The sky flashes with lightning."],
  [{'sight': -10}, "The sky flashes."],
  [{'hearing': -6}, "Thunder roars around you."],
  [{'hearing': -10}, "You hear a loud rumbling."],
  [{'smell': -5}, "The scent of ozone fills the air."],
  [{'touch': -10}, "Your hair stands on end."]]]



Showing multiple messages is easy enough also: simply send two different sets of messages. Things like blindness and deafness could easily be implemented as penalties to the sense scores.
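As a sketch of how such a compound message might be resolved (the resolver itself and the deterministic stand-in check are my assumptions; the real system presumably rolls dice and subtracts the range penalty from the score first):

```python
def resolve_message(message, senses, skill_check):
    """Return the text of the first sub-message whose checks all pass.

    `message` is a list of [checks, text] pairs as in the examples above,
    where `checks` maps sense name -> difficulty (lower = easier, matching
    the post's convention). `senses` maps sense name -> score, and
    `skill_check(score, difficulty)` stands in for the game's real check.
    """
    for checks, text in message:
        if all(skill_check(senses.get(sense, float('-inf')), difficulty)
               for sense, difficulty in checks.items()):
            return text  # first success fires, and we stop
    return None          # nothing was perceived

# Deterministic stand-in check: pass when the score meets the difficulty.
check = lambda score, difficulty: score >= difficulty

say = [[{'hearing':  -4}, "Tonitrus says 'Hello.'"],
       [{'hearing':  -6}, "Tonitrus says something."],
       [{'hearing': -10}, "You hear a voice somewhere."]]
```

With `{'hearing': 0}` the first check passes; with `{'hearing': -5}` only the -6 fallback fires; a listener below -10 perceives nothing at all.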

This is a pretty good approach, I think, although it does have some issues that bug me a bit. Sensory information in written text is kind of annoying to separate out, and this system requires less of that than any other system I've seen; however, sometimes the sensory information seems to be in the wrong order. A werewolf, for example, might have scent as his preferred sense, making the order of the other senses in the messages a bit of a nuisance for him. In this particular case, he could close his eyes to intentionally fail the sight checks, but I wonder if there are other such situations that are not so easily dealt with. On the whole, I think it's a pretty solid design, and while I haven't tested my code much (I'm still very pre-alpha, and I take long breaks from it), it worked pretty well for all the examples I could think of.

One thing that does bug me slightly is that I don't have any code for organizing the messages themselves, nor a reliable approach for doing so. The system only really works properly if I hand-craft the messages to follow a certain layout (easiest checks happen earlier than harder checks, multi-sense information before single-sense, sight and hearing are given preference, etc.). This bugs me a bit on principle, as screwing up the order will cause the messages to behave erratically, but it also bugs me because it doesn't allow me to sort by any preferred sense. Say a werewolf wants to sniff, for example; it might be useful to re-sort lists to favor smell. Or perhaps such things could be implemented by giving the ability to auto-fail certain senses: if a werewolf elects via some command (sniff, perhaps) to fail all other senses, he'll only see sensory information pertaining to scent.

As a bonus, I realized after writing it that this system could also allow for checks against arbitrary skills to show different messages based on skill levels. While this could be interesting in general, it seems particularly interesting in the case of languages. I think sensory information combined with skill checks should probably be kept to a minimum, though, for sanity's sake.

Hopefully someone else finds this useful. I actually designed/implemented this a long time ago (6 months to a year), but it didn't occur to me that it'd be useful to anyone else until KaVir suggested I post information about it here.
Kernal



Joined: 01 Jul 2007
Posts: 16

PostPosted: Fri Nov 19, 2010 5:27 am    Post subject: Reply with quote

Very nice Tonitrus, thanks for posting. I have been thinking about something similar for some time, but haven't gotten around to an implementation.

Do you include continuous events? For example, the wind howling.
Perhaps a more important example would be a sustained fight heard next door. You could send a message corresponding to every event, but that would constitute a lot of messages:
You hear the clash of metal on metal nearby.
You hear the whistle of a sword cutting through the air nearby.
You hear the crackling of electricity to the east.
etc., etc. Depending on the rate and repetitiveness of the actions, this could be quite annoying. Perhaps a system could be included for continuous sounds that messages only the beginning, the end, and some intervals in between. So:
You hear an explosion to the east.
You hear the ring of a sword leaving its scabbard.
You hear the telltale sounds of a battle to the east.
<pause>
You hear the telltale sounds of a battle to the east.
<pause> .....etc.
The sounds of battle to the east abruptly cease.

Cheers,
Kernal
Tonitrus

PostPosted: Fri Nov 19, 2010 2:47 pm    Post subject: Reply with quote

Kernal wrote:
Do you include continuous events? For example, the wind howling. [...] Perhaps a system could be included for continuous sounds to message the beginning, end, and some intervals.



Sadly, no, although I've been thinking about continuous sensory events off and on for a while. The approach I've always considered is tracking sensory information on a character as it's perceived, but I don't think that's terribly viable. Recently I considered creating sensory objects, though I haven't thought this through as much as I might like. Certain sensory information would be instant, which would work as normal, but lasting sensory information could be represented with sensory objects. Or maybe even instant sensory messages should be contained in very short-lived objects (to gain the advantages mentioned below). The basic idea is that a sensory object would work something like this, with "messages" meaning compound messages as I described above:

location = where the sensory object is centered from
creation_message = a message broadcasted as an instant effect when the object comes into existence
destruction_message = a message broadcasted as an instant effect when the object is destroyed
description = a message describing the object as perceived while it remains
duration = how long it lasts
conditions = what conditions are required for the object to continue to exist (list of boolean functions and arguments)

And probably other stuff I haven't thought of yet.
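A minimal Python sketch of such a sensory object, using the fields just listed (the class and method names are mine, and the condition format is one guess at the "(list of boolean functions and arguments)" above):

```python
from dataclasses import dataclass, field

@dataclass
class SensoryObject:
    """A lasting sensory event; a sketch, not code from the game."""
    location: tuple            # where the sensory object is centered
    creation_message: list     # compound message fired when it appears
    destruction_message: list  # compound message fired when it is destroyed
    description: list          # compound message shown while it persists
    duration: float            # how long it lasts, in seconds
    conditions: list = field(default_factory=list)  # (predicate, args) pairs

    def still_exists(self):
        """The object persists only while every condition holds."""
        return all(predicate(*args) for predicate, args in self.conditions)
```

A "surge of power" from a long cast, for instance, would be one of these: the creation message fires once, and the description keeps answering sense commands until the duration runs out or a condition fails.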

What actually started me thinking about this was the "surge of power" messages from God Wars II, although I'd considered similar things as anti-spam devices. When someone shifts near you, you see a message like "You sense a surge of power [whatever distance and direction away]." I thought it might be nice to have spells emit a similar message, and to have the sense linger while a spell is cast. This seemed pretty spammy, so I figured similar sensory objects should just merge with one another. At any rate, supposing someone was casting some huge spell that takes a few minutes to cast, you'd see the initial message ("You sense a surge of power [distance, direction]"), but the object would be created in that place, representing that sense. I guess seeing other characters would have to be handled by sensory objects too, hmm. At any rate, the message would continue to show if you typed a relevant sensory command, and similar messages would collapse together. I.e., 5 continuous surges of power in approximately the same place would just show the first creation message and a single message for all of them.

Theoretically, nearby sensory objects could collapse together if they have certain properties in common, and new sensory objects would just alter existing ones. I'd have to figure out some way of sorting the priority of sensory information to make it work properly with my message system, I think, but it'd be a pretty flexible system if so. Maybe some sort of priority could be assigned. Drawing a sword is harder to hear than the clang of a sword hitting another sword, so, by my system, it'd make sense to put the drawing first, as it's harder to detect. However, then the clang message wouldn't be heard by perceptive people. Perhaps a system of prioritization of sensory messages would be the best way to sort them. Or maybe sword clangs and other such jarring sounds should have their own object. Or maybe clangs of swords should always drown out sounds like drawing weapons; sound does kind of work that way, after all.

Another thing that might be interesting: with the way my messages work, you could daisy-chain a lot of messages in one container message, and sooner or later people will fail certain checks and get a chance at the next message in the queue, meaning it could be used to generate highly variable messages. Combat, for example, might have all the sounds of drawing swords and breaking shields, and people falling down rickety stairs, but people would not tend to see most of the messages. They'd see various selections from the list, and could use sense commands to focus on getting more information. That way, continuous sensory information could be used as a monotony-breaker, an exploration device, an anti-spam mechanic, and a thematic device all at once.
Kernal

PostPosted: Fri Nov 19, 2010 4:54 pm    Post subject: Reply with quote

I've been considering a continuous-noise system with two rules (although it assumes a reasonable grouping of events, i.e., classifying all sounds from the battle as one associated event; this way unrelated sounds, like someone talking in the room with you, won't be overwritten).

Rule 1: A new sound is only displayed if it's the new loudest sound.
Rule 2: The effective loudness of a sound decays exponentially, with maybe a 10s decay time. The effective loudness may be scaled, as well, as a tweaking measure.

In this way the ramp-up of a battle is heard (drawing swords, muttering insults, scuffling feet, then clang of metal), and the most dramatic occurrences also are heard. The repetitive and quieter sounds are not displayed.

The system could be extended to include things quieting down as well, although I don't immediately see a particularly graceful way of doing so. One possibility is a check 5 or so seconds after the last message: if no new message has been displayed, a "quieting" message can be sent.
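The two rules above could be sketched like this in Python (the class name, method shape, and caller-supplied timestamps are my assumptions; only the display-if-louder rule and the exponential decay come from the post):

```python
import math

class ContinuousSound:
    """Rule 1: a new sound is displayed only if it beats the current
    effective loudness. Rule 2: effective loudness decays exponentially,
    here with a 10-second decay time."""

    def __init__(self, decay_time=10.0):
        self.decay_time = decay_time
        self.loudness = 0.0    # effective loudness right now
        self.last_update = None

    def should_display(self, loudness, now):
        """Decay the running loudness to `now`, then apply Rule 1."""
        if self.last_update is not None:
            elapsed = now - self.last_update
            self.loudness *= math.exp(-elapsed / self.decay_time)
        self.last_update = now
        if loudness > self.loudness:
            self.loudness = loudness
            return True
        return False
```

This gives the ramp-up behaviour described: the first clang of metal displays, the repeated quieter clangs are suppressed, and after a lull even a modest sound gets through again.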

Cheers,
Kernal
Tonitrus

PostPosted: Sat Nov 20, 2010 2:04 am    Post subject: Reply with quote

Kernal wrote:
[...] assumes a reasonable grouping of events; ie classifying all sounds from the battle as being associated as one event; this way unrelated sounds, like someone talking in the room with you, won't be overwritten


Ideally, events would be grouped as you describe; however, I haven't thought of a viable approach to determine what events are related. How do you intend to determine what sounds/sensory messages are part of the same overall event?
Kernal

PostPosted: Sat Nov 20, 2010 3:02 am    Post subject: Reply with quote

I suppose that's the crux of the issue. I haven't come up with anything I'm convinced would work, but I can outline a rough scheme below. Unfortunately, special cases complicate everything.

Include some appropriate object which keeps track of the necessary info (last broadcast event, location, etc.) as well as a list of contributors. Include in the class of each possible contributor a reference to a continuous event.

Whenever two contributors to continuous events "interact", their respective events are merged; they are now contributing to the same continuous event.

The definition of "interact" is a little hazy. A simple start would be literal interactions. For example, if I hit you with a sword, we are now both counted under the umbrella of a single continuous combat. If Bob then hits me, all three of us are considered to be part of one continuous event. If some amount of time passes without a new broadcast from the event (maybe, three decay times), then the continuous event is considered ended.

By liberally applying "interaction" checks, the system could be expanded nicely. For example, you could imagine a busy road spamming messages of people walking by and the sounds of footsteps. Despite the lack of any physical "interaction", these can all be lumped together as one continuous event.

It would likely be important to include a "type" of continuous event, as well. For example, combat would be one type, and environmental might be another. Environmental effects could combine with other environmentals, and combats with combats, but an environmental could not combine with a combat; even if you're fighting an air elemental, your actions should not contribute to continuous weather messages.
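The merge scheme above might be sketched like this (all names here are illustrative, not from an actual codebase; contributors start in their own single-member event, and interacting merges events of the same type):

```python
class ContinuousEvent:
    """One ongoing, perceivable 'thing' with a type and contributors."""

    def __init__(self, kind):
        self.kind = kind          # e.g. 'combat' or 'environmental'
        self.contributors = set()

    def add(self, contributor):
        self.contributors.add(contributor)
        contributor.event = self


class Contributor:
    def __init__(self, name, kind):
        self.name = name
        self.event = None
        ContinuousEvent(kind).add(self)  # starts in its own event


def interact(a, b):
    """Merge the events of two interacting contributors, if types allow."""
    if a.event is b.event:
        return a.event                   # already one event
    if a.event.kind != b.event.kind:
        return None                      # combat never merges with weather
    merged, absorbed = a.event, b.event
    for c in list(absorbed.contributors):
        merged.add(c)                    # re-point everyone at one event
    return merged
```

So if I hit you, our two events merge; if Bob then hits me, all three of us share one continuous combat, while the air elemental's weather event stays separate.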

Cheers,
Kernal
Tonitrus

PostPosted: Tue Nov 23, 2010 7:22 am    Post subject: Reply with quote

Seems ok in theory, but I find myself having trouble with the specifics (the special cases you mentioned). Let's say you have a group of people talking, standing in the same "room". Each of them says something, we track that their voices all come from the same place, and we lump them together as one event. People outside the conversation hear bits of the conversation from the conversation event, but miss most of the details. People within the conversation presumably get the full (or mostly full) version of what is going on. Ok, that seems simple enough.

This actually is similar to an idea I was thinking about recently that allowed for sort of quasi-turn-based combat that still was more or less real-time. In my system, people could queue attacks as normal, and, when they interacted, a "battle" object would be created that would run through the queued messages one at a time, one message per x period of time. This was intended to keep combat flowing at a sane rate, even when things got heated. Seems simple enough, create an object when people fight, when another person joins into a fight, add them to the combat.

It quickly got messy, however. I'll give you an example. You and I, and a dozen other people, fight with swords in a room. Just at this instant, the assassin who has been paid to kill me fires upon me. He is at maximum range for his bow. He joins the combat we are in, and is now subject to the same queue we are. Except he can't see most of the combat (he's too far away), so he experiences it as lag. Now, my idea actually consisted of overriding the normal turn sequence for rounds, but hopefully that demonstrates a potential problem. Let's say we have 2 people talking, 5 feet apart. We can smudge the distance, count the sound as coming from the same source, and treat it as one event object. But people can easily communicate from 50 feet away. If we have two people, 50 feet apart, talking to one another, do we still consider it to be one object? Presumably. But now suppose we have a chain of 3 people, 50 feet apart, with the two people on the ends communicating with the one in the middle but unable to hear each other. In this particular case there'd still be some sound overheard, but that is not always the case. We'll say for the purposes of discussion that environmental conditions or heavy-handed coding dictate that A and C can't hear each other, but each can hear B, and B can hear both A and C. How do we track this as one event object? And if we have objects of this nature, do we manually exclude people who can't detect one another, and keep the same object? For reference, I'm assuming that people can't detect one another if not part of the same object, to demonstrate how mismatched objects could cause difficulties. In practice, I think it would be a little bit weirder, with bits of seemingly similar data mysteriously matching up (e.g., a screaming knife fight and an (errantly) audible whisper spoken by someone in the fight).

I wonder if it might be preferable to have a sorted list of events somewhere, then smudge them into singular sense "objects" for the observer only. I.e., 10 events are happening within my earshot; the code arranges them, sorts them, and presents them to me in the form of a sensory object that I can partially interact with (by seeing a selection of the messages). Unless individuals who can't detect one another split things up, as in the A-B-C example, I would expect sensory objects and liberal interaction checks to lead to massive sensory objects that take up large amounts of space; e.g., a chain of events might include the man 3 blocks down the street in my conversation with the mail man.

Any thoughts? This is of use to me, both for sensory information, and possibly the fight system I was considering.
Kernal

PostPosted: Mon Nov 29, 2010 1:18 pm    Post subject: Reply with quote

Tonitrus wrote:
I wonder if it might be preferable to have a sorted list of events somewhere, then smudge them into singular sense "objects" for the observer only. [...] Any thoughts? This is of use to me, both for sensory information, and possibly the fight system I was considering.


A few things: I think your concern about 'objects' spanning unreasonable distances is certainly warranted; an easy first step is to restrict each object to a room, so leaving the room removes you from that event object. It's not as general as I'd like to see, but I can't imagine many important scenarios that would require large, spanning event objects.

Your proposal of maintaining a list of sensory events is probably more graceful and flexible, but, unless I'm missing something, seems to be unable to distinguish between a new sensory event and a continuation of an existing sensory event. For example, if a fight begins, that should be broadcast immediately. What I've been discussing is a method of reducing the number of subsequent messages broadcast, while maintaining a reasonable rate of information flow and update. Ideally, all the information should still be available in some form (not completely sure how this would be achieved, but some loss due to confusion is reasonable).

Cheers,
Kernal