"When I put several transparent objects one in front of another or inside another, POV-Ray calculates a few of them, but the rest are completely black, no matter what transparency values I give."
Short answer: Try increasing the max_trace_level value in the global_settings block (the default is 5).
Long answer:
Raytracing has a peculiar feature: It can calculate reflection and refraction. Each time a ray hits the surface of an object, the program checks whether the surface is reflective and/or refractive. If so, it shoots another ray from that point in the appropriate direction.
Now, imagine we have a glass sphere. Glass reflects and refracts, so when the ray hits the sphere, two additional rays are calculated, one outside the sphere (for the reflection) and one inside (for the refraction). Now the inside ray will hit the sphere again, so two new rays are calculated, and so on and so on...
You can easily see that there must be a maximum number of reflections/refractions calculated, because otherwise POV-Ray would calculate that one pixel forever.
This number can be set with the max_trace_level option in the global_settings block.
The default value is 5, which is enough for most scenes. Sometimes it isn't enough (especially when there are lots of semitransparent objects one over another), so you have to increase it.
So try something like:
global_settings { max_trace_level 10 }
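As an illustration of why the default can run out, here is a minimal sketch (all the values are made up) with several nested semitransparent spheres; each refraction spawns another ray, so with the default max_trace_level of 5 the innermost spheres would come out black:

```pov
// Five nested transparent spheres: each refractive hit spawns a new ray,
// so rays reaching the innermost spheres exceed the default depth of 5
// and would be returned as black without this setting.
global_settings { max_trace_level 15 }

camera { location <0, 0, -12> look_at 0 }
light_source { <10, 20, -20> rgb 1 }

#declare R = 1;
#while (R <= 5)
  sphere {
    0, R
    hollow
    pigment { rgbt <1, 1, 1, 0.9> }  // mostly transparent
  }
  #declare R = R + 1;
#end
```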
"When I make an image with POV-Ray, it seems to use just a few colors, since I get color banding or concentric circles of colors or whatever where it shouldn't. How can I make POV-Ray use more colors?"
POV-Ray always writes true color images (i.e. with 16777216 colors: 256 shades of red, 256 shades of green and 256 shades of blue). (This can be changed when outputting to PNG or to B/W TGA, but that is irrelevant to this question.)
So POV-Ray is not guilty. It always uses the maximum color resolution available in the target image file format.
This problem usually happens when you are running Windows in 16-bit color (i.e. only 65536 colors, the so-called hicolor mode) and open the image created by POV-Ray with a program which doesn't dither the image. The image itself is still true color, but the program is unable to show all the colors and displays only 65536 of them. (Dithering is a method that "fakes" more colors by mixing pixels of two adjacent colors to simulate the in-between colors.)
So the problem is not in POV-Ray, but in your image viewer program. Even if POV-Ray shows a poor image while rendering because you have a resolution with too few colors, the image file created will have full color range.
"When I rotate an object, it disappears from the image or moves very strangely. Why?"
You need to understand how rotation works in POV-Ray.
Objects are always rotated around the axes. When you rotate, for example, <20,0,0>, you are rotating 20 degrees around the X-axis (counter-clockwise). This is independent of the location of the object: it always rotates around the axis (what is the center of the object anyway? how would you locate it?). This means that if the object is not centered on the axis, it will orbit the axis like the Moon orbits the Earth (always showing the same side to the Earth).
It is very good practice to define all objects centered at the origin (i.e. with their 'center' located at <0,0,0>). Then you can rotate them arbitrarily, and after that translate them to their proper location in the scene. It is a good idea to do this with every object even if you don't rotate it (you can never tell whether you will want to rotate it some day).
What if, after all, you have a very complex object defined, but its
center is not at the origin, and you want to rotate it around its center?
Then you can just translate it to the origin, rotate it and then translate
it back to its place. Suppose that the center of the object is located at <10,20,-30>; you can rotate it this way:
translate -<10,20,-30>
rotate <whatever>
translate <10,20,-30>
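To see the difference the centering makes, here is a sketch (the object and all the coordinates are invented for illustration):

```pov
// A box declared away from the origin, with its center at <10, 20, -30>:
#declare Crate =
  box { <9, 19, -31>, <11, 21, -29> pigment { rgb <0.7, 0.5, 0.3> } }

// This makes the box orbit the Y axis, because it is nowhere near the origin:
object { Crate rotate 45*y }

// This spins the box in place around its own center:
object {
  Crate
  translate -<10, 20, -30>  // move the center to the origin
  rotate 45*y               // rotate around the Y axis
  translate <10, 20, -30>   // move it back
}
```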
"If I tell POV-Ray to render a square image or otherwise change the aspect ratio, the output image is distorted. What am I doing wrong?"
The problem is that the camera is set to an aspect ratio of 4/3, while the picture you are trying to render has an aspect ratio of 1/1 (or whatever).
You can set the aspect ratio with the 'right' keyword in the camera block. The general way to set the correct aspect ratio for your image dimensions is:
camera {
  right x*ImageWidth/ImageHeight
  (other camera settings...)
}
This keyword can also be used to change the handedness of POV-Ray (see the question about Moray and POV-Ray handedness for more details).
Note: One might ask, "why doesn't POV-Ray always set the aspect ratio of the camera automatically according to the resolution of the image?".
There is one thing wrong with this thought: it assumes that pixels are always square (i.e. the aspect ratio of the pixels is 1/1). The logic of this behaviour becomes clear with an example:
Suppose that you design a scene using a regular 4/3 aspect ratio, as usual (such as 320x240, 640x480 and so on). This image is designed to look good when viewed on a 4/3 monitor (as home computer monitors generally are).
Now you want to render this image for the Windows startup image, whose resolution is 320x400. This resolution does not have an aspect ratio of 4/3, and its pixels are not square (they have an aspect ratio of 1/0.6 instead of 1/1). When you render your image at 320x400 with POV-Ray and display it with the monitor set to that resolution (as it is at Windows startup, when the startup image is shown), the aspect ratio will be correct and the image will have the correct proportions (it will not be squeezed in any direction).
If you had changed the aspect ratio of the camera to 320/400 (instead of using the default 4/3), you would not only have got a different image (showing parts of the scene not visible in the original, or hiding parts that were visible), but it would also have looked squeezed when shown at the 320x400 screen resolution.
Thus, the camera aspect ratio is the aspect ratio of the final image on screen, when viewed in the final resolution (which might not be a 4/3-resolution). Since the monitor screen has an aspect ratio of 4/3, this is the default for the camera as well.
"When two of my objects touch or overlap, there are strange, seemingly random pixels on the surfaces where they meet. What causes this?"
This is the typical 'coincident surfaces problem'. It happens when two surfaces are exactly in the same place. For example:
union {
  box { <-1,0,-1>, <1,-2,1> texture { Texture1 } }
  box { <-2,0,-2>, <2,-1,2> texture { Texture2 } }
}
The top surface of the first box is coincident with the top surface of the second box. When a ray hits this area, POV-Ray has to decide which surface is closer. It can't, since they are in exactly the same place. Which one it actually chooses depends on the floating-point calculations, rounding errors, initial parameters, the position of the camera, etc., and varies from pixel to pixel, causing those seemingly "random" pixels.
The solution to the problem is to decide which surface you want on top and translate that surface just a bit, so that it protrudes past the unwanted surface. In the example above, if we want the second box to be on top, we write something like:
union {
  box { <-1,0,-1>, <1,-2,1> texture { Texture1 } }
  box { <-2,0.001,-2>, <2,-1,2> texture { Texture2 } }
}
Note that a similar problem appears when a light source is exactly on a surface: POV-Ray can't calculate accurately if it's actually inside or outside the surface, so dark (shadowed) pixels appear on every surface that is illuminated by this light.
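For example, a minimal sketch of the light-source case (all the values are illustrative):

```pov
// A light source lying exactly on the plane y = 0 produces random
// shadowed pixels, because POV-Ray cannot decide which side it is on.
plane { y, 0 pigment { rgb 1 } }

// light_source { <0, 0, 0> rgb 1 }    // exactly on the surface: artifacts
light_source { <0, 0.001, 0> rgb 1 }   // nudged just above it: clean
```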
"When I try to use one of the textures from stars.inc in my sky_sphere, it doesn't work. Why?"
The only thing that works in a sky_sphere is a pigment. Textures and finishes are not allowed. Don't be discouraged, though, because you can still use the textures in stars.inc with the following method:
Extract only the pigment statement from the declared textures. For example:
texture {
  pigment {
    color_map { [0 rgb ...] [.5 rgb ...] [1.0 rgb ...] }
    scale ...
  }
  finish { ... }
}
becomes:
pigment {
  color_map { [0 rgb ...] [.5 rgb ...] [1.0 rgb ...] }
  scale ...
}
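A pigment extracted this way can then be dropped straight into a sky_sphere. As a sketch, a star-field-like pigment might look something like this (the pattern and values here are invented, not taken from stars.inc):

```pov
sky_sphere {
  pigment {
    granite
    color_map { [0 rgb 0] [0.72 rgb 0] [1 rgb 1] }  // mostly black, white specks
    scale 0.3
  }
}
```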
The reason for this is that a sky_sphere doesn't have a surface; it isn't an actual object. It is really just a fancy version of the background feature which gets its color from a pigment instead of being a flat color. Because of this, normal and finish features, which depend on the characteristics of the surface of an object for their calculations, can't be used. The textures in stars.inc were intended to be mapped onto a real sphere, and can be used something like this:
sphere {
  0, 1
  hollow  // So it doesn't interfere with any media in the scene
  texture { YourSkyTexture }
  scale 100000
}
"When I apply filter or transmit to my image map, nothing happens. Why?"
POV-Ray can only apply filter or transmit to 8-bit, 256-color paletted images. Since most .tga, .png and .bmp images are 24-bit images with 16 million colors, they do not work with filter or transmit. If you must use filter or transmit with your image maps, you must reduce the color depth to a format that supports 256 colors, such as the .gif image format.
You might also check the POV-Ray docs on using the alpha channel of .png files if you need specific areas that are transparent.
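For example, a sketch assuming a hypothetical 256-color paletted file "clouds.gif":

```pov
// Because GIF is a paletted format, 'filter all' works here; it would be
// silently useless on a 24-bit image.
plane {
  -z, -10
  pigment {
    image_map {
      gif "clouds.gif"  // hypothetical file name
      filter all 0.7    // make every palette entry 70% filtering
    }
  }
}
```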
"My isosurface is not rendering properly: there are holes or random noise or big parts or even the whole isosurface just disappears."
The most common reason for this type of phenomenon with isosurfaces is a too low max_gradient value. Use evaluate to make POV-Ray calculate a proper max_gradient for the isosurface (remember to specify a sensible max_gradient even when you use evaluate, or else the result may not be correct).
Sometimes a too high accuracy value can also cause problems even when the max_gradient is correct. If adjusting max_gradient doesn't seem to help, try lowering the accuracy value as well.
Remember that specifying a max_gradient which is too high for an isosurface, although it gives the correct result, is needlessly slow, so you should always calculate the proper max_gradient for each isosurface you make.
Note that there are certain pathological functions where no max_gradient or accuracy will help. These functions usually have discontinuities or similar "ill-behaving" properties. With those you just have to find a solution which gives the best quality/speed tradeoff. Isosurfaces work best with functions which give smooth surfaces.
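As a minimal well-behaved sketch (all values chosen for illustration): for the function x*x + y*y + z*z - 1 (a unit sphere), the gradient magnitude is 2 times the distance from the origin, which stays below about 3.9 everywhere inside a container box reaching to 1.1 in every direction, so a max_gradient of 4 is safely sufficient:

```pov
isosurface {
  function { x*x + y*y + z*z - 1 }
  max_gradient 4      // gradient <= ~3.81 anywhere inside the container
  accuracy 0.001
  contained_by { box { -1.1, 1.1 } }
  pigment { rgb <0.8, 0.8, 1> }
}
```

With max_gradient set too low for this function, parts of the sphere would show exactly the holes and noise described above.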