reading the future
-
PIEBALDconsult wrote:
My boss just asked for a detailed list of one hundred projects we each plan on completing within the next five years, including completion time.
Almost like being asked to come up with the next batch of winning lottery numbers :->
-
PIEBALDconsult wrote:
My boss just asked for a detailed list of one hundred projects we each plan on completing within the next five years, including completion time.
Luckily no one demands that, but they don't fall far short of it. I am constantly asked for my professional opinion on the direction of rendering technology as far out as 10 years. Ten years is an incredibly long time in computer terms.
_________________________ Asu no koto o ieba, tenjo de nezumi ga warau. Talk about things of tomorrow and the mice in the ceiling laugh. (Japanese Proverb)
-
Anton Bentzen wrote:
but basically more power equals more polygons which equals finer detail. Nothing new.
Actually, most graphics artists recognize that the power is not in the polygons, though polygons are part of the equation. A more accurate translation of what she said would be: "Human skin is a translucent material requiring depth. To render depth-based images properly we must go beyond simple triangle rendering. We render multiple light-path normal maps (bump maps), and by combining a sequence of layered images and processing the percentages of light passing through or reflecting, we create realistic scattering detail, including even the light that travels below the triangle surface, below the simulated skin." Basically she is saying: we have a whole bunch of floating-point power, a whole bunch of texture memory, and a whole bunch of parallel processors on top of the usual triangles, and we found a way to put them all together into a good live image. Subsurface-illuminated rendering paths are still something of a frontier in graphics. :)
_________________________ Asu no koto o ieba, tenjo de nezumi ga warau. Talk about things of tomorrow and the mice in the ceiling laugh. (Japanese Proverb)
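For the curious, here is a minimal sketch of that layered-translucency idea in Python rather than shader code: direct Lambert shading plus light transmitted back out of absorbing layers, Beer-Lambert style, with "wrapped" lighting so illumination bleeds past the terminator the way real skin does. The layer constants below are invented for illustration, not taken from the actual demo.

    import math

    def skin_shade(n_dot_l, layers):
        """Toy layered subsurface approximation: direct diffuse
        reflection plus light scattered back out from translucent
        layers below the surface."""
        surface = max(n_dot_l, 0.0)                  # ordinary Lambert term
        transmitted = 0.0
        for depth, sigma, tint in layers:
            wrap = (n_dot_l + depth) / (1.0 + depth)  # light wraps past terminator
            attenuation = math.exp(-sigma * depth)    # Beer-Lambert absorption
            transmitted += max(wrap, 0.0) * attenuation * tint
        return surface + transmitted

    # Hypothetical layers: (depth, absorption coefficient, scatter weight)
    layers = [(0.05, 8.0, 0.4),   # epidermis: thin, strongly absorbing
              (0.20, 3.0, 0.8),   # dermis: deeper, most of the red glow
              (0.50, 1.5, 0.3)]   # subcutaneous: deepest, soft fill
    for ndl in (1.0, 0.5, 0.0, -0.2):
        print(f"N.L = {ndl:+.1f} -> shade {skin_shade(ndl, layers):.3f}")

Note that the transmitted term stays positive even when N.L goes negative: that soft glow past the shadow edge is exactly what flat triangle shading can't give you.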
Thanks for clearing that up. As I said, I don't know much about the tech behind graphics. :)
Anton Bentzen Denmark
-
I have often made jokes about reading the future, because I am constantly asked to do so at work. About 3+ years ago I was asked to predict human rendering quality. I said that we would know shortly before 2010 if we were headed in the right direction, but probably shortly after 2010. Well, I guess I was a few years off; it looks like we should have realistic rendering of a real person (as opposed to a fictitious person, which has the advantage that we don't already know how they should look). http://www.techeblog.com/index.php/tech-gadget/video-nvidias-amazing-human-head-demo[^] Now, there is no motion and no muscle-to-skin elasticity effects, so we are not there yet. It will be interesting to see if we make it before 2010, but it sure looks that way. Of course the devil is in the details; it may only get harder from here. :) I need to find someone who carries the old movie "Looker" hmmmm....
_________________________ Asu no koto o ieba, tenjo de nezumi ga warau. Talk about things of tomorrow and the mice in the ceiling laugh. (Japanese Proverb)
Ummm... If you speak of real-time rendering, we're not there yet, but offline renderers have been able to deliver photo-realistic results like that for YEARS now. Take a look at v-ray, brazil r/s, finalRender, Maxwell or any bigger 3d site's gallery like 3dm3.com or 3dkingdom.org. Take a look at this for example: http://www.3dm3.com/forum/articles.php?action=viewarticle&artid=81[^] Enjoy. ;)
____________________________ I didn't know what to put in here.
-
Adis H. wrote:
If you speak of real-time rendering, we're not there yet
yeah, sorry... I was speaking of real-time rendering: live rendering of human-accurate models, indistinguishable from and as fully dynamic as the real thing. There is a virtual-girl pinup coffee-table art book available at your favorite bookstore by special order. It has everything from surrealistic to realistic raytraced renderings (the better ones are BRDF/BTDF hybrids or full photon tracings). But I can't post a link in the lounge.... It sells rapidly at SIGGRAPH, usually sold out in the first day or two. -- modified at 12:18 Sunday 6th May, 2007
_________________________ Asu no koto o ieba, tenjo de nezumi ga warau. Talk about things of tomorrow and the mice in the ceiling laugh. (Japanese Proverb)
-
I see you know your raytracing. ;) And as far as realism goes, the best results I've seen are from the Maxwell renderer, but that doesn't come as a surprise when you consider that it's a path tracer (IIRC). Imagine the rendering times. ;) Second to that is V-Ray with its irradiance mapping.
____________________________ I didn't know what to put in here.
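Why a path tracer nails realism (at the cost of those rendering times) is easiest to see in code. Below is a minimal sketch of the Monte Carlo estimator at the heart of any path tracer: one diffuse surface under a made-up sky light. The scene and constants are invented for illustration; this is not Maxwell's actual algorithm.

    import math, random

    def sky(direction):
        """Hypothetical environment light: brighter looking straight up."""
        return max(direction[2], 0.0)

    def sample_hemisphere():
        """Uniform random direction on the upper hemisphere (pdf = 1/(2*pi))."""
        z = random.random()
        phi = 2.0 * math.pi * random.random()
        r = math.sqrt(max(1.0 - z * z, 0.0))
        return (r * math.cos(phi), r * math.sin(phi), z)

    def one_sample(albedo):
        """One-sample Monte Carlo estimate of the rendering equation at a
        diffuse surface: Lo = brdf * Li * cos(theta) / pdf."""
        d = sample_hemisphere()
        brdf = albedo / math.pi      # Lambertian BRDF
        li = sky(d)                  # a real path tracer would recurse here
        pdf = 1.0 / (2.0 * math.pi)
        return brdf * li * d[2] / pdf

    n = 100_000
    estimate = sum(one_sample(0.7) for _ in range(n)) / n
    print(f"estimated radiance: {estimate:.4f}")  # analytic: 0.7 * 2/3 = 0.4667

The noise in the estimate falls only as 1/sqrt(samples), which is exactly where the legendary render times go.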
-
Adis H. wrote:
I see you know your raytracing.
Since the David Thomas raytracer. :) Google that, and my age will really show.... BRDF/BTDF global illumination is kind of a hobby of mine, though I always get the latter acronym wrong. :doh: -- modified at 12:16 Sunday 6th May, 2007
_________________________ Asu no koto o ieba, tenjo de nezumi ga warau. Talk about things of tomorrow and the mice in the ceiling laugh. (Japanese Proverb)
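For anyone else who mixes the acronyms up: the BRDF describes how light reflects off a surface, the BTDF how it transmits through it, and together they make up the BSDF. A toy sketch of how the energy splits between the two, using Schlick's approximation to Fresnel (the refraction indices are just illustrative defaults for air and glass):

    import math

    def schlick_fresnel(cos_i, n1=1.0, n2=1.5):
        """Schlick's approximation to Fresnel reflectance: the fraction
        of light that reflects (BRDF side) vs. transmits (BTDF side)."""
        r0 = ((n1 - n2) / (n1 + n2)) ** 2
        return r0 + (1.0 - r0) * (1.0 - cos_i) ** 5

    def split_energy(cos_i):
        kr = schlick_fresnel(cos_i)
        kt = 1.0 - kr   # energy conservation: reflected + transmitted = 1
        return kr, kt

    for angle_deg in (0, 30, 60, 85):
        cos_i = math.cos(math.radians(angle_deg))
        kr, kt = split_energy(cos_i)
        print(f"{angle_deg:2d} deg: reflected {kr:.3f}, transmitted {kt:.3f}")

At normal incidence almost everything transmits (about 4% reflects for glass); at grazing angles nearly everything reflects, which is why skin, water, and glass all get mirror-like at the horizon.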
-
I did a graduate paper on raytracing and global illumination principles the other day. Almost every page had a big picture on it to "explain" the subject better. ;)
____________________________ I didn't know what to put in here.
Adis H. wrote:
I did a graduate paper on raytracing and global illumination principles the other day. Almost every page had a big picture on it to "explain" the subject better.
That is the smart way to do it. Something like http://www.gdconf.com/conference/archives/2004/hoffman_naty.doc[^] gets dry when all you have is lighting diagrams and formulas. I did a graduate paper on augmented reality (blending virtual and live images) which included much of the same, but it was for the contract lead. I did the work, he retyped it, and he got the degree. :) When I found out what it was for, I made him put in writing exactly what I was required to provide to him, down to specifics, so that if anyone ever cared to look, the paper trail was there. :) I can say my work earned a Masters; it just wasn't mine. :doh:
_________________________ Asu no koto o ieba, tenjo de nezumi ga warau. Talk about things of tomorrow and the mice in the ceiling laugh. (Japanese Proverb)
-
Almost 10 years ago, I predicted voice recognition would replace keyboards and mice in 3 to 5 years. Still waiting as I type this...... :doh:
-
Ed Gadziemski wrote:
Almost 10 years ago, I predicted voice recognition would replace keyboards and mice in 3 to 5 years. Still waiting as I type this......
hehehehe, yup. Someone predicted that to me back when we were buying touch screens. He said we should hold off on the purchases, because by the end of the project voice recognition would replace keyboards.... I looked at him squarely... and said, yup... I think it will go something like this: operator: "Hey, Fred, let's go to lunch." computer: "Preparing to launch." operator: "Abort! abort! abort!!" I actually do believe voice recognition will supplement keyboards in some niche jobs, but the problem is that voice areas overlap and confuse computers. We will first have to achieve AI decision association to accurately identify who is talking to whom. But in the meantime, I have used speech recognition and text-to-speech. I even wrote a phoneme-based programming language after watching the movie Dune in college. Compiler class + Mentats == phoneme-based programming language. But it was lost with LANA when my brother tossed my disks in the water. All things in time. :) The time is simply not here yet.
_________________________ Asu no koto o ieba, tenjo de nezumi ga warau. Talk about things of tomorrow and the mice in the ceiling laugh. (Japanese Proverb)
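The lunch/launch failure mode is easy to demonstrate: in phoneme space the two words are a single sound apart, so a little acoustic noise can flip one into the other. A toy sketch over ARPAbet-style phoneme strings (the transcriptions are illustrative; no real recognizer works on plain edit distance):

    def edit_distance(a, b):
        """Classic Levenshtein distance, here over phoneme sequences."""
        m, n = len(a), len(b)
        dist = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(m + 1):
            dist[i][0] = i
        for j in range(n + 1):
            dist[0][j] = j
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                cost = 0 if a[i - 1] == b[j - 1] else 1
                dist[i][j] = min(dist[i - 1][j] + 1,        # deletion
                                 dist[i][j - 1] + 1,        # insertion
                                 dist[i - 1][j - 1] + cost) # substitution
        return dist[m][n]

    # ARPAbet-ish transcriptions: the two commands differ by one vowel.
    lunch  = ["L", "AH", "N", "CH"]
    launch = ["L", "AO", "N", "CH"]
    print("phoneme edit distance:", edit_distance(lunch, launch))  # -> 1

One swapped vowel is well within the error bars of a noisy office, which is why disambiguating by context (and by who is talking to whom) matters so much.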
-
El Corazon wrote:
I need to find someone who carries the old movie "Looker" hmmmm....
You can pick up a copy at Amazon.com [^] Steve
-
Vista's voice recognition works pretty slick!
Rocky <>< Latest Code Blog Post: SilverlightCity blog running! Latest Tech Blog Post: Joost invites at 999!
-
El Corazon wrote:
I need to find someone who carries the old movie "Looker" hmmmm....
That body-scanning scene.... :) Actually that film is a good example of people getting predictions hilariously wrong. In the film they digitise actors (and then murder them) and use their digital proxies to make adverts - BUT they composite the people onto real footage of empty film sets. Everyone knows that it's much easier to computer-render the set than the people, so why are they filming empty sets with motion-control cameras? I guess the director thought people just sitting in front of banks of computers would make for a boring conclusion to the film :)