Q: I'm using an integrated graphics card and I'm having problems with artefacts appearing, and my frame rate is very poor.

This is quite a common problem when using an integrated graphics card (such as the Intel 82845). Most OpenGL-based programs such as Vega, Performer and OSG will more than likely have problems when used with integrated graphics such as the common Intel 82845 chipset.

The first thing to do is to visit the manufacturer's web site or contact their support channels to obtain the latest graphics driver for the card. Installing the newest driver normally helps to some extent. Make sure you select at least 24-bit or 32-bit colour, and allocate as much RAM to the card as possible; you will need at least 64MB, and the more the card supports the better. If you only have 32MB then your performance will not be good.

The performance of an integrated card will in most cases be a lot worse than that of a dedicated graphics card, as the integrated card usually uses system RAM, which slows it down and also places a lot of the graphics processing on the machine's CPU. To be honest, integrated cards are terrible for real-time 3D graphics; they are fine for normal desktop activities but not for 3D. The best recommendation I can give is to install a dedicated graphics card; these days you can get a very reasonable card for around $100 that will blow away the integrated card.
There can be many reasons that your simulation/application is running at only 1-2 Hz or less. Typically this indicates that you may have dropped into software rendering mode on your graphics card. This can happen when you set up OpenGL and request the pixel format for your OpenGL window: it normally means you have asked for a format or a setting that the card cannot support or does not support. I say cannot support because the card's resources may be limited at the time you request the pixel format; for example, your resolution may be too big for a 32-bit Z buffer, or another OpenGL application may have already consumed most of the resources. Or you may be requesting a setting that is simply not supported by your graphics card. You can find the formats supported by your card as follows:

On Irix you can use findvis on the command line to display the available bit plane configurations supported on the Irix system.

On Windows you can use a program from Nvidia to show the available bit plane configurations.

If your requested format is valid and you are still running slowly, then you have to try to simplify your application: reduce the data, reduce the application's workload, get a faster machine, maybe use a multi-processor machine, get better graphics hardware, reduce your resolution, etc.
Each OpenGL window uses a frame buffer, which is a collection of bit planes storing the information about each pixel. The organization of these bit planes defines the quality of the rendered images and is known as a Pixel Format. Pixel formats are made up from different bit planes which are allocated to features such as:

Color (RGB)
Alpha
Depth buffer (Z bits)
Samples (multisample)
Stencil
Accumulation

Note that support for the various pixel format configurations and combinations is not uniform across different Windows graphics cards, Linux systems and Irix systems. Vega will ask the system for a bit plane specification supplied through the Lynx Windows panel settings or through code, but the request may not be granted. When the notification level (in the Systems panel) is set to Info or higher, messages tell the user which bit plane configuration is actually being used.

There are generally two methods of checking the bit plane configurations your system supports:

On Irix you can use findvis on the command line to display the available bit plane configurations supported on the Irix system.

On Windows you can use a program from Nvidia to show the available bit plane configurations: http://developer.nvidia.com/object/nvpixelformat.html
Basically the idea behind LOD processing is that objects which are barely visible don't require a great amount of detail in order to be recognizable. Objects are typically barely visible either because they are located a great distance from the eye point or because atmospheric conditions are obscuring visibility. Both atmospheric effects and the visual effect of perspective minimize the importance of objects at ever increasing ranges from the observer's eye point: perspective foreshortening makes objects appear to shrink in size as they recede into the distance. To improve performance and to save rendering time, objects that are visually less important in a frame can be rendered with less detail. The LOD approach optimizes the display of complex objects by constructing a number of progressively simpler versions of an object and selecting one of them for display as a function of range. An undesirable effect called popping occurs when the sudden transition from one LOD to the next is visually noticeable. To remedy this, SGI graphics platforms offer a feature known as Fade Level of Detail that smooths the transition between LODs by allowing two adjacent levels of detail to be sub-sample blended. This is now supported by most scene graphs, as long as the graphics hardware supports multi-sampling. Here's a link to a practical overview of LOD.
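To make the idea concrete, here is a minimal sketch (my own illustration, not from the original FAQ) of a range-based LOD node in Open Scene Graph; the model file names are hypothetical:

    #include <osg/LOD>
    #include <osgDB/ReadFile>

    // Build an LOD node that swaps models based on distance from the eye point.
    osg::LOD* createHouseLOD()
    {
        osg::LOD* lod = new osg::LOD;
        // Highest detail from 0 to 100 units, medium to 500, lowest to 10000.
        lod->addChild( osgDB::readNodeFile( "house_high.osg" ), 0.0f, 100.0f );
        lod->addChild( osgDB::readNodeFile( "house_med.osg" ), 100.0f, 500.0f );
        lod->addChild( osgDB::readNodeFile( "house_low.osg" ), 500.0f, 10000.0f );
        return lod;
    }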
For the symmetric frustum, both these planes are perpendicular to the line of sight of the viewer. The horizontal and vertical FOVs (fields of view) determine the radial extent of the view into the scene. FOVs are entered in degrees for the full width of the view desired. Entering -1 for either (but not both) FOV causes the system to aspect match that FOV axis. For example, suppose the horizontal FOV is 45 degrees and the vertical is set to -1. Once the window and channel are sized, the system selects the appropriate value for the vertical FOV to maintain an aspect ratio equal to that of the channel viewport.
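As a rough sketch of what that aspect matching works out to (my own illustration, not from the Vega documentation), the matched vertical FOV can be derived from the horizontal FOV and the viewport aspect ratio:

    #include <cmath>

    // Derive the vertical FOV (in degrees) that aspect-matches a horizontal FOV
    // for a viewport with the given width/height ratio.
    double aspectMatchedVerticalFOV( double horizontalFOVDegrees, double aspect )
    {
        const double halfH = horizontalFOVDegrees * 0.5 * M_PI / 180.0;
        const double halfV = std::atan( std::tan( halfH ) / aspect );
        return 2.0 * halfV * 180.0 / M_PI;
    }

For a 45-degree horizontal FOV in a 4:3 viewport this gives roughly 34.5 degrees vertically.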
(Image: symmetric frustum.) Also see the Viewing Frustum Overview image.
This type of perspective frustum requires six values to define it. Clicking on the Asymmetric Frustum option displays the six entry fields. The near and far values are the same as for the symmetric frustum. The left, right, bottom, and top values define the side planes of the frustum; each is the angular offset in degrees for the plane it represents. See vpChannel and the Vega Prime Programmers Guide for further details.
(Image: asymmetric frustum.) Also see the Viewing Frustum Overview image.
The sides of the frustum are parallel to the line of sight of the viewer. The Near and Far distances define the near and far clipping planes. The Left, Right, Bottom, and Top values define the frustum side planes. These values bear a direct relationship to the scale of the object being viewed. See vpChannel and the Vega Prime Programmers Guide for further details. Also see the Viewing Frustum Overview image.
Isectors provide the ability to handle collision detection between objects within a scene graph and are an essential part of most visual simulations. For example, the current Height Above Terrain (HAT) in a flight simulator or a driving simulator is typically determined by firing a vertical line segment from the aircraft or vehicle towards the terrain and calculating the distance between the vehicle and the intersection point on the ground. Another example is the use of an isector to pick or select things in the scene; this is typically done using a Line of Sight (LOS) isector.
A line segment in this case is defined by two XYZ vectors, a begin and an end position. A vpIsector class such as vpIsectorLOS will position and orient the line segment. Basically speaking, the isector will traverse its target scene graph and test each node's bounding sphere against the line segment. If no intersection is found then the node and all of its children are rejected; this is what makes the collision detection fast. If an intersection with the bounding sphere is encountered, the test becomes a progressively finer-grained test of each child node until the leaf geometry node is reached, at which point data on the detected collisions can be stored, such as pointers to the node, the position of intersection, the normal perpendicular to the intersected surface, etc. (This is of course an oversimplification of a more complicated process.)
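The same segment-firing technique can be sketched with Open Scene Graph's (pre-2.x) osgUtil::IntersectVisitor; this is my own illustrative example, not Vega Prime code, and the 10000-unit segment length is an arbitrary assumption:

    #include <osg/LineSegment>
    #include <osgUtil/IntersectVisitor>

    // Fire a vertical segment down from 'position' and compute the HAT.
    bool getHeightAboveTerrain( osg::Node* terrain, const osg::Vec3& position, float& hat )
    {
        osg::ref_ptr<osg::LineSegment> seg = new osg::LineSegment(
            position, position - osg::Vec3( 0.0f, 0.0f, 10000.0f ) );

        osgUtil::IntersectVisitor iv;
        iv.addLineSegment( seg.get() );
        terrain->accept( iv );

        if ( iv.hits() )
        {
            osgUtil::IntersectVisitor::HitList& hits = iv.getHitList( seg.get() );
            hat = position.z() - hits.front().getWorldIntersectPoint().z();
            return true;
        }
        return false;
    }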
60 What are the Differences between Real-time and Animation Applications
Real-time applications are used where responding to user input is part of the simulation, for example during flight training and interactive architectural demonstrations. Both real-time and animation applications simulate real and imaginary worlds with highly detailed models, produce smooth continuous movement, and render at a certain number of frames per second. Some of the main differences are:
Typically you should be able to use a format conversion program such as Polytrans or Deep Exploration. These offer a good selection of import formats and the ability to output OpenFlight models.
Q: When I do a glReadPixels and write the result out as an image file or to an AVI file, I get other windows captured. Why?

Presuming that other windows overlap the graphics window when you call glReadPixels, it is likely that you will see those windows in your capture. Unfortunately this is not so much a platform issue as a consequence of the OpenGL specification. Paraphrasing section 4.1.1 "Pixel Ownership Test": ...if a pixel in the frame buffer is not owned by the GL context, the window system decides the fate of the incoming fragment; possible results are discarding the fragment... Note that no mention is made of front or back buffer; it's entirely the window system's call, so any code depending on a particular implementation's behaviour is very non-portable. This seems to be more of a problem for Windows users than on X11-based OSes (although nothing is guaranteed there either). On Windows you can force your application's window to stay on top, and then glReadPixels will capture just the application's window.
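For reference, a minimal read-back of the back buffer looks something like the following (my own sketch; the width/height parameters are assumptions):

    #include <vector>
    #include <GL/gl.h>

    // Read the current back buffer into a tightly packed RGB byte array.
    std::vector<unsigned char> readBackBuffer( int width, int height )
    {
        std::vector<unsigned char> pixels( width * height * 3 );
        glReadBuffer( GL_BACK );
        glPixelStorei( GL_PACK_ALIGNMENT, 1 );  // no row padding
        glReadPixels( 0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, &pixels[0] );
        return pixels;
    }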
66 How Can I Stop My Window being On Top after using &wndTopMost
How can I stop my OSG window from being on top after using &wndTopMost as described in FAQ 65? On Windows this is quite straightforward using the following on your window:
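The original code sample was not preserved in this copy; a minimal Win32 sketch of the usual approach (an assumption on my part) is:

    #include <windows.h>

    // Demote a topmost window back to the normal z-order band,
    // leaving its position and size untouched.
    void clearTopMost( HWND hwnd )
    {
        SetWindowPos( hwnd, HWND_NOTOPMOST, 0, 0, 0, 0,
                      SWP_NOMOVE | SWP_NOSIZE | SWP_NOACTIVATE );
    }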
Alternatively you can also post changes and submissions under the Community section of the Open Scene Graph web site. This is particularly appropriate for completely new functionality such as Node Kits and Plug-ins; you can then inform the world of the entry via the osg-users or osg-submissions lists.
Particle::setLifeTime(....);
Using IntersectVisitor is more efficient than using GL pick. The reason is that GL pick requires a round trip to the graphics pipeline, which is I/O bound and generally an expensive operation. The IntersectVisitor does ray/line segment intersections very efficiently by using the scene graph, node bounding spheres and trivial rejection, all in CPU- and memory-bound operations. Note that what is lacking in the current IntersectVisitor implementation is the ability to pick lines and points: there are only ray intersections, which can intersect triangles and bounding volumes.
"Reference Pointers" can also known as "Smart Pointers", "Auto Pointers", and may have different functionality, but are principally the same thing In brief, "Reference Pointers" are C++ objects that simulate normal pointers by implementing operator-> and the unary operator*. In addition to sporting pointer syntax and semantics, "Reference Pointers" often perform useful tasks such as memory management, reference counting, scope, locking all under the covers thus freeing the application from carefully managing the lifetime of pointed-to objects Here's a link to great right up on Reference Pointers with OSG by Don Burns. this should give you more than enough information to get you by http://dburns.dhs.org/OSG/Articles/RefPointers/RefPointers.html Other Articles by Don can be found here http://dburns.dhs.org/OSG/Articles
To retrieve the current XYZ position from a SceneView's matrix you can do something along the lines of:
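The original snippet did not survive in this copy; a sketch of one common approach (assuming an osgUtil::SceneView pointer named sceneView) is:

    #include <osg/Matrix>
    #include <osgUtil/SceneView>

    // The eye position is the translation component of the inverse view matrix.
    osg::Matrix inverseView = osg::Matrix::inverse( sceneView->getViewMatrix() );
    osg::Vec3 eyeXYZ = inverseView.getTrans();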
For multiple DOF nodes you need to load your model or grab a pointer to a node, then traverse it using a custom NodeVisitor and call 'doftransform->setAnimationOn( false )' on all the DOF transform nodes found. Code for the node visitor would look something like the following:
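The original listing was lost from this copy; a minimal sketch (assuming the DOF nodes are osgSim::DOFTransform, as produced by the OpenFlight loader) would be:

    #include <osg/NodeVisitor>
    #include <osg/Transform>
    #include <osgSim/DOFTransform>

    // Visit every node and switch animation off on each DOFTransform found.
    class DisableDOFAnimationVisitor : public osg::NodeVisitor
    {
    public:
        DisableDOFAnimationVisitor()
            : osg::NodeVisitor( osg::NodeVisitor::TRAVERSE_ALL_CHILDREN ) {}

        virtual void apply( osg::Transform& node )
        {
            osgSim::DOFTransform* doftransform =
                dynamic_cast<osgSim::DOFTransform*>( &node );
            if ( doftransform )
                doftransform->setAnimationOn( false );
            traverse( node );
        }
    };

Usage: load the model, then call model->accept( visitor ).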
For a single DOF transformation node you need to find or get a pointer to the node and then simply call doftransform->setAnimationOn( true );
For multiple DOF nodes you need to load your model or grab a pointer to a node, then traverse it using a custom NodeVisitor and call 'doftransform->setAnimationOn( true )' on all the DOF transform nodes found. See the code for the node visitor in FAQ 72.
A quick overview of render bins: during the cull traversal, Open Scene Graph can rearrange the order in which geometry is rendered for improved performance and image quality. Open Scene Graph does this by binning and sorting the geometry. Binning is the act of placing geometry into specific bins, which are rendered in a specific order. Open Scene Graph provides two default bins: the opaque render bin is drawn before the transparent render bin so that transparent surfaces can be properly blended with the rest of the scene. Open Scene Graph applications are free to add new render bins and to specify arbitrary render bin orderings, as well as the type of sorting within the render bins themselves. A good source of information on scene graph components, traversals and render bins can be found in the SGI Performer online documentation.
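As an illustration (my own sketch, not from the original FAQ), placing a piece of transparent geometry into the depth-sorted transparent bin looks like this:

    #include <osg/GL>
    #include <osg/Geode>
    #include <osg/StateSet>

    void makeTransparent( osg::Geode* geode )
    {
        osg::StateSet* stateset = geode->getOrCreateStateSet();
        // Enable blending and hint that this subgraph is transparent...
        stateset->setMode( GL_BLEND, osg::StateAttribute::ON );
        stateset->setRenderingHint( osg::StateSet::TRANSPARENT_BIN );
        // ...which is equivalent to asking for bin 10, depth-sorted.
        stateset->setRenderBinDetails( 10, "DepthSortedBin" );
    }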
Unfortunately the answer is currently no, you cannot share scene handlers between cameras. Each camera must have a unique scene handler, each scene handler will have its own SceneView, and each SceneView's State should have a unique contextID. The pseudo code goes something like this:

    foreach camera N:
        cameraN = create Camera
        sh = create osgProducer::OsgSceneHandler
        sh->getSceneView()->getState()->setContextID( N );
        cameraN->setSceneHandler( sh );
The short answer is that most of the Open Scene Graph examples and manipulators adhere to what is found in most simulation packages, which is: X east (to the right), Y north (forward, into the screen), Z up. This orientation is imposed by the osgGA matrix (and therefore camera) manipulators. By default the osg core does not impose anything on top of the OpenGL default, which is: X to the right, Y up, with the view looking down the negative Z axis.
You could possibly do something along the lines of the sketch below:
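The original suggestions were not preserved in this copy; one approach I'd sketch (an assumption, using standard OSG state calls) is to force texturing off with an OVERRIDE so child StateSets cannot re-enable it:

    #include <osg/Node>
    #include <osg/StateSet>

    // Force 2D texturing off for a node and everything beneath it.
    void disableTextures( osg::Node* node )
    {
        node->getOrCreateStateSet()->setTextureMode(
            0, GL_TEXTURE_2D,
            osg::StateAttribute::OFF | osg::StateAttribute::OVERRIDE );
    }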
Also see FAQ 46, How to disable Textures on an osg::Node.
To replay a previously recorded animation path in the osgviewer you can do the following:

    osgviewer mymodel.osg -p saved_animation.path
You can find an example of generating a Perlin noise texture in the osgshaders example, where it is used for the marble and erode effects. Here's one link to an article on Perlin noise texture usage: http://www.engin.swarthmore.edu/~zrider1/advglab3/advglab3.htm Try a Google search for more links, there are many: http://www.google.com/search?q=perlin+noise+opengl
Producer can be used in a multi-tasking environment to allow multiple Cameras to run in parallel, supporting hardware configurations with multiple display subsystems. Threading, camera synchronization and frame-rate control are simplified in the Producer programming interface. Producer Cameras have an internal rendering surface that can be created automatically, programmatically, or provided by the programmer, such that Producer can fit into any windowing system or graphical user interface. Producer manages multiple rendering contexts in a windowing-system-independent manner. Producer provides a simple yet powerfully scalable approach for real-time 3D applications wishing to run within a single window or on large, multi-display systems. Producer is highly portable and has been tested on Linux, Windows, Mac OSX, Solaris and IRIX. Producer works on all Unix-based OSes (including Mac OSX) through the X11 windowing system, and through native win32 on Windows. Producer is written with productivity, performance and scalability in mind, by adhering to industry standards and employing advanced software engineering practices. Software developers wishing to produce 3D rendering software that can display on a desktop, then move to a large system or clustered system of displays by simply changing a configuration file, can depend on Open Producer to handle all the complexity for them.
Further information on Producer can be found on Don Burns' web site: http://www.andesengineering.com/Producer
osgviewer is a basic scene graph viewing application that is distributed with Open Scene Graph. Its primary purpose is to serve as an example of how to write a simple viewer using the Open Scene Graph API, but osgviewer is also functional enough to use as a basic 3D graphics viewer.
To print out the command line options available, in a console window type:
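The command itself did not survive in this copy; the usual invocation (an assumption on my part) is:

    osgviewer --help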
To set the clear or background colour in an Open Scene Graph application you can use something along the lines of the following (assuming you are using Producer):
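The original snippet was not preserved here; a sketch using the Producer camera API (call names per Producer 1.x, as an assumption) is:

    // Set a dark blue background on the viewer's first Producer camera.
    Producer::Camera* camera = viewer.getCamera( 0 );
    camera->setClearColor( 0.0f, 0.0f, 0.3f, 1.0f );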
85 Windows equivalent of GLX_SAMPLES_SGIS & GLX_SAMPLES_BUFFER_SGIS
Q: I'm trying to use multi-sampling and anti-aliasing and want to use GLX_SAMPLES_SGIS & GLX_SAMPLES_BUFFER_SGIS, but I cannot find these in GWLExtension.h or anywhere else on my machine.
Firstly, note that GLX_SAMPLES_SGIS & GLX_SAMPLES_BUFFER_SGIS are actually SGI (Silicon Graphics) extensions and are only available on SGI big iron. What you need to do is look through GWLExtension.h for the equivalent #defines for your graphics driver and operating system. You can also check out the OpenGL Extension Registry hosted at SGI's web site: http://oss.sgi.com/projects/ogl-sample/registry In this case take a look at the following, which are the equivalent #defines that you need:
WGL_SAMPLE_BUFFERS_ARB 0x2041
WGL_SAMPLES_ARB 0x2042
Some people are having issues with the cursor not showing inside of OSG/Producer. Christopher K Burns offered the following workaround, which has helped some users. One solution is to force the load of the cursor resource: in the Producer project, specifically the file "RenderSurface_Win32.cpp", change the function _setCursor to read:
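The exact replacement code was not preserved in this copy; the spirit of the workaround (my own reconstruction using plain Win32 calls) is to load and set the stock arrow cursor explicitly:

    // Inside RenderSurface_Win32.cpp, _setCursor():
    // force-load the standard arrow cursor rather than relying on the
    // window class default, which may be NULL.
    SetCursor( LoadCursor( NULL, IDC_ARROW ) );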
Now re-compile and re-link, and hopefully you should once again be able to see the cursor.
The first thing to check is that texturing is enabled for the scene graph or the node tree your polygons are attached to; see FAQ 47 on how to enable texturing. If you are generating your own geometry then make sure you create or share an osg::StateSet with setTextureAttributeAndModes and give this to your osg::Geode. Also remember that you must give your geometry texture coordinates (UVs), which tell OSG/OpenGL how to map the texture onto the polygons; if you have no texture coordinates then you will not see any texture on the geometry. You can create your texture coordinates yourself, or you can use OpenGL and glTexGen to create them for you. (See the OpenGL Programming Guide for more information on textures and texturing state.)
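Putting those pieces together, a minimal sketch (my own example; the image file name is hypothetical) looks like:

    #include <osg/Geode>
    #include <osg/Texture2D>
    #include <osgDB/ReadFile>

    void applyTexture( osg::Geode* geode )
    {
        // Create the texture from an image on disk.
        osg::Texture2D* texture = new osg::Texture2D;
        texture->setImage( osgDB::readImageFile( "brick.png" ) );

        // Attach it to texture unit 0 and enable texturing.
        osg::StateSet* stateset = geode->getOrCreateStateSet();
        stateset->setTextureAttributeAndModes(
            0, texture, osg::StateAttribute::ON );
        // The geometry itself must still supply texture coordinates,
        // e.g. geometry->setTexCoordArray( 0, texcoords ).
    }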
This is quite easy to do, as the OSG file format is an ASCII-based format. You can simply load your file or files into your OSG application and write your scene or node out to an OSG file using the .osg plug-in, e.g. osgDB::writeNodeFile( *my_node, "my_node.osg" ). Alternatively you can load your files into the osgviewer application found in the OSG distribution and press the "o" key to write out the whole scene graph (it saves to "saved_model.osg").
Q: Is there any way to tell OSG to use all of the available video memory, or to specify the maximum amount of video memory to use? In a nutshell, at this time there is no way to do this. It is beyond the control of OSG and would reside at the graphics driver level, and currently I'm not aware of any driver (OpenGL/DirectX etc.) that offers this ability.
90 Can I tell osgconv to use only the lowest LOD's from OpenFlight files
Q: Is it possible when using osgconv to tell the program that when it encounters an LOD node it should just use the lowest level of detail? Currently there is no direct way to force osgconv to use only the lowest LODs. What you could do is use the LODScale on SceneView/OsgCameraGroup to force the selection of lower LOD levels, but this cannot guarantee the lowest LOD. To do it properly you will have to write your own code to traverse the scene graph, modify it manually to remove the LOD nodes you don't want, and then write out your modified model. Note that you will also need to watch out for additive LODs, as the lowest level there is probably not what you want.
This is really very easy if you are using osgUtil::IntersectVisitor as the base for your intersection testing: osgUtil::IntersectVisitor::Hit stores the normal of the collision in the variable _intersectNormal.
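In practice you would usually go through the Hit accessors rather than the member itself; a small sketch (assuming a previously configured IntersectVisitor 'iv' and line segment 'seg', as in the HAT example earlier):

    osgUtil::IntersectVisitor::HitList& hits = iv.getHitList( seg.get() );
    if ( !hits.empty() )
    {
        // Normal at the first intersection, in world coordinates.
        osg::Vec3 normal = hits.front().getWorldIntersectNormal();
    }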
It is very straightforward to turn a camera on or off. First you need to get a pointer to your camera, e.g. Producer::Camera *camera = getPointerToMyCamera(); then to turn the camera on call camera->enable(); or to turn the camera off call camera->disable();
Q: I cannot seem to find any documentation about Producer::CameraConfig or any examples of how to use a camera config; is there any documentation or examples? The documents can be found in the Producer distribution: Producer/doc/CameraConfig.bnf and Producer/doc/CameraConfig.example. There are also other configuration (.cfg) file examples; look for them in doc/Tutorial/SourceCode/*/*.cfg. Also note that some of the Producer documentation is only available after you have actually built the software.
What you are seeing is not really a bug; it is correct behaviour for DDS (Direct Draw Surface) textures and depends on how you generated them. DDS was designed by Microsoft and thus originally intended for DirectX, which has its origin in the upper-left corner, while OpenGL has its origin in the lower-left corner. Basically you need to tell your DDS generation tool to invert the image; many tools can do this, some don't. In Open Scene Graph you can flip the DDS imagery at load time by passing the ReaderWriter::Options string "dds_flip" into readImageFile( file, options ).
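A small sketch of that option in use (my own example; the file name is hypothetical):

    #include <osg/ref_ptr>
    #include <osgDB/ReadFile>
    #include <osgDB/ReaderWriter>

    // Ask the DDS plug-in to flip the image vertically on load.
    osg::ref_ptr<osgDB::ReaderWriter::Options> options =
        new osgDB::ReaderWriter::Options( "dds_flip" );
    osg::Image* image = osgDB::readImageFile( "terrain.dds", options.get() );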
There is no equivalent to glPush/glPop for specifically categorized state attributes. The application of state attributes occurs via a lazy state update after all StateSets have been accumulated in a traversal of the scene graph, in that the StateSet is applied only at the drawable. Pushing and popping of StateSets occurs during the recursion of the graph traversal. So if you set a state attribute that affects line width (the osg::LineWidth state attribute), for example, it will be pushed when encountered on the scene graph and popped when returning from the traversal of any subgraph where the StateSet is applied. osg::StateSet is a set of both modes and attributes and is something you attach to the scene graph to specify the OpenGL state to use when rendering. osg::State is a per-graphics-context object that tracks the current OpenGL state of that graphics context. It tracks the state in OSG object and mode terms rather than in totally raw OpenGL terms, although with respect to OpenGL modes it does track them directly. osg::State exists to implement lazy state updating, and also as a way of getting the current OpenGL state without requiring a round trip to the graphics hardware.
You can find a searchable version of the Open Scene Graph mailing list archives online at http://dburns.dhs.org/osgarchiver
© Copyright 2004-2005 Gordon Tomlinson. All Rights Reserved. All logos, trademarks and copyrights in this site are property of their respective owners.