Why can't you tilt your view or look straight upward?

Have a question, suggestion, or comment about Aleph One's features and functionality (Lua, MML, the engine itself, etc)? Post such topics here.

Post Dec 31st '08, 11:15

I remember being asked about that, and my answer at the time was that I wanted to keep the visibility calculations as close to the original as possible. That is indeed correct; when I wrote my Marathon Map Viewer, I'd done more general visibility calculations for that, but I could not get it exactly right. It would make subtle errors, but errors nonetheless.

There is also the problem of doing visibility for sprites, since they can extend over several map polygons.

I've been researching the possibility of using more advanced renderers, because they already contain the code for the more advanced rendering that we might want to do, from general-view-direction rendering to OpenGL shaders. I've looked at:
  • Crystal Space -- looks like the most mature project. It uses rendering portals, making it possible to do 5D space with it. It also does collision testing, and its developers have been building an object-management layer to go along with it, the Crystal Entity Layer. They are even working on a demo game, "Crystal Core", though it seems rather immature.
  • OGRE -- a renderer only, no collision testing or physics. However, it can use rendering portals.
  • Irrlicht -- a bit like OGRE
So I think I'll go with Crystal Space, at least for now.

The most serious problem with such engines is that they seem to lack facilities for doing automapping. The Marathon engine's visibility calculator keeps track of every surface that was rendered, and uses that information to construct the automap. Most other well-known extruded-2D engines have had automaps, but most of the better-known full-3D engines have not had automapping capability.

I think that an automap is essential, because without one it can be difficult to find one's way around, at least for me. One can see the effect of automap-less level design in many levels built for engines without them -- those levels often have a linear player-path topology. Nonlinearity is often MUCH more interesting, but it produces the problem of keeping track of where one has been, a problem easily solved with an automap.

I have not been very successful in finding out how to get visibility-test results in Crystal Space. There is a Crystal Entity Layer interface for a trigger that fires when some object becomes visible - iWatchQuestTriggerFactory - but it will be hard to find out what parts of CS it uses without studying its source code.

In any case, I've signed up at the Crystal Space site, and I'll be asking about visibility testing, liquids, etc. there.
Last edited by lpetrich on Dec 31st '08, 11:20, edited 1 time in total.

lpetrich

Post Dec 31st '08, 11:46

I thought MML made it possible to look straight upward...

Shadowbreaker
Melbourne, Victoria

Post Dec 31st '08, 14:23

Shadowbreaker wrote:I thought MML made it possible to look straight upward...

It's possible to look straight up just by using custom physics, but the engine renders things in such a way that everything looks like crap if you do so. Petrich is talking about using new rendering code to get around this.
underworld : simple fun netmaps // prahblum peack : simple rejected netmaps
azure dreams : simple horrible netmaps // v6.0!!!: thomas mann's greatest hits : simple simple netmaps

irons
(.Y.)

Post Jan 1st '09, 05:39

I don't think I've ever wanted to look straight up in my entire life as a Marathon player. I love the engine just the way it is.

Iritscen

Post Jan 4th '09, 03:32

Unless, of course, you would like to live your life in ignorance of me sitting on your head with a SPNKR.

And then cursing my inability to aim down far enough to shoot you.
I have been wading in a long river and my feet are wet.

L'howon
Somewhere outside the Citadel Of Antiquity

Post Jan 4th '09, 04:55

Lh wrote:Unless of course, you would like to live your life in ignorance of me sitting on your head with a SPNKR.

And then cursing my inability to aim down far enough to shoot you.


I can't believe it but I got a pretty decent laugh out of this.



logan

Post Jan 4th '09, 10:55

Iritscen wrote:I don't think I've ever wanted to look straight up in my entire life as a Marathon player.


How do you look at the stars at night then?
acks45
20 Minowere Dr. Fromidlov, Canada.

Post Jan 4th '09, 19:18

Maybe some insight can be gathered by checking out the source code of engines that faced similar problems. I don't know if eDuke32 is open source (I suspect it would have to be), but it accomplished a 3-point perspective with Duke Nukem 3D, and the Build engine used weird tricks to do slopes, rooms-over-rooms, and sprite objects, so its programmers may have faced similar challenges. Somebody very recently did this from scratch (and with no source code!) for Dark Forces; I may be able to ask him if he ran into similar issues.
sweatervest

Post Jan 4th '09, 20:42

ack45 wrote:How do you look at the stars at night then?

Iritscen wrote:I don't think I've ever wanted to look straight up in my entire life as a MARATHON PLAYER



logan

Post Jan 4th '09, 21:37

lpetrich wrote:I remember being asked about that, and my answer at the time was that I wanted to keep the visibility calculations as close to the original as possible. That is indeed correct; when I wrote my Marathon Map Viewer, I'd done more general visibility calculations for that, but I could not get it exactly right. It would make subtle errors, but errors nonetheless.

An improved renderer would be great. I'm not sure what you mean here, though. Correct perspective and keeping the visibility close to the original are mutually exclusive, are they not? Any difference in geometry visibility calculations could be called an "error", but could be an improvement.

There is also the problem of doing visibility for sprites, since they can extend over several map polygons.

What about the z-buffering you added? Could that be retained, and would it do the job for sprites in a new renderer?

I don't see a solution for the automap, other than taking Crystal Space or whichever and adding the same strategy of keeping track of every face rendered. Another problem is landscapes, which are currently projected cylindrically. This means if you look all the way up 'outdoors', there's nothing to see, unless we distort the image into a sphere or other closed surface.

Crater Creator

Post Jan 5th '09, 12:12

ack45 wrote:How do you look at the stars at night then?

By backpedaling while looking 30 degrees upward.

On topic, I think this is a pretty cool project. It would put all the fanboys' pandering for bump mapping and "true 3D rendering" at rest, at least. Also, it's great to see you posting more often.
Last edited by chinkeeyong on Jan 5th '09, 12:14, edited 1 time in total.
Embrace imagination.

chinkeeyong
Singapore

Post Jan 5th '09, 17:03

Crater Creator wrote:Another problem is landscapes, which are currently projected cylindrically. This means if you look all the way up 'outdoors', there's nothing to see, unless we distort the image into a sphere or other closed surface.


A temporary solution might involve some type of extrapolation of the top row of pixels of the sky image so as to fill up the empty space with an appropriate solid color. Maybe someone would be interested in drawing some Lh'owon skyboxes down the road.

chinkeeyong wrote:It would put all the fanboys' pandering for bump mapping and "true 3D rendering" at rest, at least.


Uggg FINALLY!! [MTongue]
Last edited by sweatervest on Jan 5th '09, 17:05, edited 1 time in total.
sweatervest

Post Jan 5th '09, 18:26

If you cannot solve the automap problem, or even if it looks like you will be going down a rathole trying to solve it (which I infer), I don't think that's a showstopper. If the user can choose a more modern view of the game, but has to trade the visibility automap for a visited automap, there are probably some who would make that choice. Most of us don't use the map at all in net games, for instance.

As long as the original renderer with visibility automap is available for purists.

treellama
Pittsburgh

Post Jan 5th '09, 19:22

The idea of z-buffering (depth buffering) is to keep track of the depth at which each pixel is rendered -- it makes pre-render depth sorting unnecessary, since that sorting is accomplished as each pixel is rendered. And if one can avoid depth sorting, then one can avoid doing a lot of coding.

But the problem is that for z-buffering to work correctly, a surface's opacity must be all-or-none. The appearance of partially opaque surfaces depends on their rendering order, meaning that one is stuck with doing depth sorting before rendering them.
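The order-dependence problem can be illustrated with a minimal sketch (hypothetical code, not Aleph One's or Crystal Space's actual renderer): an opaque depth-tested write gives the same result in any submission order, while an alpha-blended write does not.

```cpp
#include <cassert>
#include <limits>

// One fragment arriving at a pixel: its depth, color, and opacity.
struct Fragment { float depth; float color; float alpha; };

// One pixel of the framebuffer plus its depth-buffer entry.
struct Pixel {
    float depth = std::numeric_limits<float>::infinity();
    float color = 0.0f;
};

// Opaque write: the depth test keeps only the nearest fragment,
// so the result is independent of submission order.
void write_opaque(Pixel& p, const Fragment& f) {
    if (f.depth < p.depth) { p.depth = f.depth; p.color = f.color; }
}

// Semitransparent write: blends with whatever color is already there,
// so the result depends on submission order -- hence the need for
// back-to-front depth sorting before rendering such surfaces.
void write_blended(Pixel& p, const Fragment& f) {
    if (f.depth < p.depth)
        p.color = f.alpha * f.color + (1.0f - f.alpha) * p.color;
}
```

Submitting two opaque fragments in either order yields the same pixel; submitting two half-transparent ones in different orders yields different pixels, which is exactly why all-or-none opacity is what the z-buffer handles for free.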

I had come to think that I had not gotten the depth sorting quite right in my Map Viewer; that had likely caused its visibility bugs.


To do landscapes in full 3D, one would create a barrel-shaped skybox with the landscape texture-mapped onto the sides. The top and bottom caps of the skybox would have flat (textureless) colors composed from average values of the top and bottom landscape-pixel values.
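As a sketch of how those cap colors might be computed (hypothetical helper, assuming a simple row-major RGB texture; average the top row for the ceiling cap and the bottom row for the floor cap):

```cpp
#include <cstdint>
#include <vector>

// One texel of the landscape texture.
struct RGB { std::uint32_t r, g, b; };

// Average one row of a row-major (width x height) landscape texture,
// producing a flat cap color for the top or bottom of the skybox barrel.
RGB average_row(const std::vector<RGB>& pixels, int width, int row) {
    std::uint64_t r = 0, g = 0, b = 0;
    for (int x = 0; x < width; ++x) {
        const RGB& p = pixels[std::size_t(row) * width + x];
        r += p.r; g += p.g; b += p.b;
    }
    return { std::uint32_t(r / width), std::uint32_t(g / width),
             std::uint32_t(b / width) };
}
```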


As to visibility, I've thought of a kludge for more recent engines, one inspired by Oni's visibility system. It's to implement a sort of "visibility radar", having one's camera shoot "visibility projectiles", and then tracking where they go and which places they hit. Crystal Space's collision system makes a list of the polygons that collided, and one can compare those polygons with all the map and scenery ones. One would likely create a hash table to speed up the search, the hash keys being composed from the polygons' vertex coordinates. The hash-table values would be the IDs of the polygons and their owner objects, and those values would go into the automap.
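A minimal sketch of that hash-table bookkeeping might look like this (all names hypothetical; a real map polygon would of course carry more data than an ID pair):

```cpp
#include <cstddef>
#include <unordered_map>
#include <utility>
#include <vector>

// A polygon vertex in fixed-point world units.
struct Vertex { int x, y, z; };

// Combine a polygon's vertex coordinates into one hash key, so a polygon
// reported by the collision system can be matched back to a map polygon.
std::size_t polygon_key(const std::vector<Vertex>& verts) {
    std::size_t h = 0;
    for (const Vertex& v : verts)
        for (int c : { v.x, v.y, v.z })
            h = h * 1000003u + std::size_t(c);  // simple polynomial hash
    return h;
}

// Map from polygon key to (polygon ID, owner-object ID); hits found by the
// "visibility projectiles" would be looked up here and fed to the automap.
using PolygonIndex = std::unordered_map<std::size_t, std::pair<int, int>>;
```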

Come to think of it, Oni's creators could have done an automap with the help of their visibility system.

lpetrich

Post Jan 5th '09, 20:07

lpetrich wrote:I had come to think that I had not gotten the depth sorting quite right in my Map Viewer; that had likely caused its visibility bugs.

Map Viewer uses all-or-nothing opacity, doesn't it? So the Z-buffer should be enough, and that's what it uses. I haven't noticed any visibility bugs.

But it does make sense that the same system could not be used for your later Aleph One OpenGL renderer, since the engine supports semitransparent sprites and (*shudder*) 3D models. Although disabling the Z-buffer bogs down any moderately complex model rendering so much that I'm not sure it's even worth supporting semitransparent sprites and 3D models at the same time :D
Last edited by treellama on Jan 5th '09, 20:07, edited 1 time in total.

treellama
Pittsburgh

Post Jan 5th '09, 21:51

Just out of curiosity, would it in general be more problematic to tweak the existing OpenGL renderer to add these features instead of going for a totally new renderer? I guess what it comes down to is how the OpenGL renderer actually does its thing. Does it construct the scene in three dimensions and handle all the transformations itself, or does A1 do the transforms in software and use OpenGL to draw the "prepared" polygons (or something different)? Basically, what I am thinking is that A1 already handles visibility correctly, so would it be possible to keep those correct calculations and only adjust the transformations, or does changing the perspective mode inherently involve new visibility calculations?
sweatervest

Post Jan 5th '09, 22:02

sweatervest wrote:or does changing the perspective mode inherently involve new visibility calculations?

Changing your line of sight involves new visibility calculations, yes. Go figure.

To answer your other question, which I believe I have already answered, the OpenGL renderer does the same calculations that the software renderer does, for accuracy.
Last edited by treellama on Jan 5th '09, 22:03, edited 1 time in total.

treellama
Pittsburgh

Post Jan 6th '09, 03:28

chinkeeyong wrote:On topic, I think this is a pretty cool project. It would put all the fanboys' pandering for bump mapping and "true 3D rendering" at rest, at least.

Back up a moment. Are we talking about using Crystal Space's complete graphics subsystem, which would give us all the graphical features that engine supports (like fancy shaders)? Or just splicing in the part which renders polygons with correct perspective? The way I'm reading Loren's posts, I think the answer is the former, which would be pretty awesome.

Crater Creator

Post Jan 6th '09, 05:33

Treellama wrote:Changing your line of sight involves new visibility calculations, yes


But does using a different perspective transform change the line of sight? I would figure if the camera and the objects being viewed don't change positions the line of sight would be unaffected.
sweatervest

Post Jan 6th '09, 08:17

Crater Creator wrote:Back up a moment. Are we talking about using Crystal Space's complete graphics subsystem, which would give us all the graphical features that engine supports (like fancy shaders)? Or just splicing in the part which renders polygons with correct perspective? The way I'm reading Loren's posts, I think the answer is the former, which would be pretty awesome.

You are reading my posts correctly -- a Crystal Space renderer would have ready-made code for doing bumpmapping and other fancy shader effects.

One would still have to define bump textures to go with the other ones, however.

The lighting I implemented for 3D models is rather simple:

I(vertex) = I(floor)*(1-m)/2 + I(ceiling)*(1+m)/2

where m = (normal vector)*(upward vector)

And I think that one would do the same thing for bumpmapped walls. One could precompute an upward-lit version and a downward-lit one, then render them with the intensities of the ceiling and floor lights, respectively. Since the walls have their own built-in lighting, one can set up a bumpmap-lighting fraction. This approach will not require much shader fancy footwork, if any.

Point lights are more difficult. One could define certain inhabitant objects to be self-luminous, and then compute the effect of their lights on their environments. One could do that by rendering an appropriately-scaled bright-spot texture, but with a bumpmap, that would be a *very* tricky task. One may have to render upward, leftward, downward, and rightward bright-spot textures combined with appropriate bumpmap textures, then add them together. And to get it working efficiently, one may need pixel-shader programming.

lpetrich

Post Jan 6th '09, 15:23

sweatervest wrote:But does using a different perspective transform change the line of sight?

Yes, of course.
I would figure if the camera and the objects being viewed don't change positions the line of sight would be unaffected.

If the camera didn't change positions, wouldn't the scene look the same?

treellama
Pittsburgh

Post Jan 6th '09, 19:02

Treellama wrote:Yes, of course.


According to this, "In perspective projection all lines of sight start at a single point". I assume this means any perspective projection (1-point, 2-point, etc.), and that the single point is the camera, which suggests to me that the lines of sight remain the same as long as the camera does not change position.

Treellama wrote:If the camera didn't change positions, wouldn't the scene look the same?


I would think that what you can see (i.e. what is obstructed and what isn't) would be the same but where stuff actually shows up on the screen could be changed by using a different perspective transform. I would think switching from a 2-point to 3-point perspective would definitely change the way a scene looks (unless of course you are looking straight forward) without having to view the scene from a different place.

All that shading stuff sounds really interesting. Before worrying about fancy lighting effects, what about starting off with just dynamic lights (or would that be harder)? For example, a shooting enemy or grenade could light up the surfaces around it. That could then open the way for real-time shadows (imagine netgames with all those SPNKRs flying around!). Full-screen motion blur would be a really interesting effect to see in A1, too.
sweatervest

Post Jan 6th '09, 19:48

So if you fire a ray parallel to the ground, you will have the same things obstructed as if you fire a ray inclined 30 degrees from the ground??

I am getting dangerously close to posting the picture I drew in Paint but didn't post because I didn't want to look condescending!
Last edited by treellama on Jan 6th '09, 19:48, edited 1 time in total.

treellama
Pittsburgh

Post Jan 7th '09, 01:28

Treellama wrote:So if you fire a ray parallel to the ground, you will have the same things obstructed as if you fire a ray inclined 30 degrees from the ground??

I am getting dangerously close to posting the picture I drew in Paint but didn't post because I didn't want to look condescending!


Haha dude, don't worry about that; I think we might be talking about different things here. What I am talking about is: for a given scene, say you want to find out whether a given point in the level is visible, so you fire a ray from your camera to the point to see if it intersects anything else first. That visibility test is a function of the camera position and the positions of the point and any objects that may be in the way, but not a function of the perspective transform that is used to draw what is visible on screen.

I would think both lines of sight you described would be fired regardless of the perspective mode, to determine what is visible at different parts of the screen. So whatever the ray parallel to the ground hits first, it hits first no matter what the perspective transform ends up being. It would only change if the camera moved or if something in the scene moved, which would change the actual lines of sight. Same thing for the ray 30 degrees from the ground: it can of course hit something different from the first ray, but it hits what it does regardless of whether the scene is viewed with 2-point or 3-point perspective. What it hits may show up on a different part of the screen, but the same object is visible in both cases, right?
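A minimal sketch of the distinction being drawn here (hypothetical code): an occlusion test against a blocking sphere uses only the camera, target, and blocker positions; no projection matrix enters the computation at all.

```cpp
// A visibility test is pure geometry: does the segment from the camera
// to a target point pass through a blocking sphere? Nothing about the
// perspective transform appears anywhere in the computation.
struct Vec3 { double x, y, z; };

double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec3 sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

// True if the segment camera->target passes within `radius` of `center`.
bool occluded(Vec3 camera, Vec3 target, Vec3 center, double radius) {
    Vec3 d = sub(target, camera);            // segment direction
    Vec3 c = sub(center, camera);            // camera-to-blocker offset
    double t = dot(c, d) / dot(d, d);        // closest approach along segment
    if (t < 0.0 || t > 1.0) return false;    // blocker not between the points
    Vec3 closest = { camera.x + t*d.x, camera.y + t*d.y, camera.z + t*d.z };
    Vec3 off = sub(closest, center);
    return dot(off, off) < radius * radius;
}
```

Tilting the camera changes which rays get fired (the point treellama is making), but for any one fixed ray, what it hits is the same under any projection.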
sweatervest

Post Jan 7th '09, 02:21

sweatervest wrote:It would only change if the camera moved

For example, if the camera is tilted instead of perpendicular to the ground. I.e. the entire point of this thread.

treellama
Pittsburgh
