So, I'm working on a 3D project that uses heavily stylized rendering. My problem is that I need to post-process the normals of every surface on screen. I intend to filter these — think a Sobel filter, but with dot products between the center pixel's normal and the 8 surrounding pixels' normals. The 4 albedo channels just aren't enough, or I'd use those. I could always pack the colors and normals into 2 channels each using some dumb algorithm. NORMAL_TEXTURE in canvas item shaders doesn't do what you'd hope it does on a viewport.
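For the curious, the kind of filter I mean could be sketched roughly like this in a canvas item shader, assuming you somehow had a screen-sized texture (here called `normals_tex`, a made-up name) holding normals packed into 0..1 with depth in the alpha:

```glsl
shader_type canvas_item;

// Hypothetical inputs: a texture of packed normals (n * 0.5 + 0.5)
// and a threshold below which a pixel counts as an edge.
uniform sampler2D normals_tex;
uniform float edge_threshold = 0.6;

void fragment() {
	vec2 px = SCREEN_PIXEL_SIZE; // one pixel, in UV units
	vec3 center = texture(normals_tex, SCREEN_UV).rgb * 2.0 - 1.0;

	float min_dot = 1.0;
	for (int x = -1; x <= 1; x++) {
		for (int y = -1; y <= 1; y++) {
			vec2 offset = vec2(float(x), float(y)) * px;
			vec3 n = texture(normals_tex, SCREEN_UV + offset).rgb * 2.0 - 1.0;
			// A dot near 1.0 means the neighbour faces the same way;
			// a low value marks a crease or silhouette. The center
			// sample dots with itself (1.0), so it never affects min.
			min_dot = min(min_dot, dot(center, n));
		}
	}

	float edge = 0.0;
	if (min_dot < edge_threshold) {
		edge = 1.0;
	}
	COLOR = vec4(vec3(edge), 1.0);
}
```

The whole problem, of course, is getting that normals texture in the first place.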
Pass the normals please
Yeah... About that. There's currently no way of accessing the screen-space normals in a 3D scene in Godot 3.2. :(
I assume you mean not without rendering them in place of the albedo. I've been trying to devise a decent workaround.
The only workaround would be to approximate it using the DEPTH_TEXTURE. This wouldn't take normal maps or smooth shading into account. That's all you've got for now.
But you can't write to the depth texture. You can write to DEPTH, but that's just a single float — still one channel more, though. The stylization I'm developing basically throws any built-in light processing out the window. Thank you though, I really appreciate the help.
I have worked out a workaround. What you do is set up an extra camera as a child of its own Viewport inside a ViewportContainer. Turn the camera 180 degrees on the z-axis and set the scale of the ViewportContainer to (-1, -1) to reverse the rotation. In the shader of each object, pass in the transforms of both cameras. You then run a test in the vertex function to tell which camera is being drawn. Using this information you can draw the normals as the albedo whenever the normal-camera is drawing. You can now use the extra viewport texture for post-processing. Congrats.
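To make the setup concrete, here's a rough GDScript sketch of the wiring described above. Node paths, group names, and uniform names are my own placeholders, not anything official:

```gdscript
extends Spatial

# Assumed scene layout:
#   MainCamera
#   ViewportContainer/Viewport/NormalCamera
onready var main_cam = $MainCamera
onready var normal_cam = $ViewportContainer/Viewport/NormalCamera

func _ready():
	# Mirror the container so the flipped camera's image reads upright.
	$ViewportContainer.rect_scale = Vector2(-1, -1)

func _process(_delta):
	# Keep the normal camera glued to the main camera, then flip it
	# 180 degrees around its local z axis so the shader can tell
	# the two cameras apart.
	normal_cam.global_transform = main_cam.global_transform
	normal_cam.rotate_object_local(Vector3(0, 0, 1), PI)

	# Feed both camera transforms to every stylized material
	# (here: nodes collected in a hypothetical "stylized" group).
	for node in get_tree().get_nodes_in_group("stylized"):
		var mat = node.material_override
		mat.set_shader_param("main_cam_xform", main_cam.global_transform)
		mat.set_shader_param("normal_cam_xform", normal_cam.global_transform)
```

The Viewport's texture then holds the normals pass, ready for post-processing.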
Insightful! Good work. How can the shader tell which camera's being used, if you don't mind me asking?
The camera for the normals is flipped upside down. This means that CAMERA_MATRIX will be different for that camera. CAMERA_MATRIX is then compared in the vertex function with the camera transforms, which you pass in by uniform (from GDScript). I tested this and it all works. Edit: for clarity
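A comparison along those lines might look like the following spatial shader sketch. The uniform name and epsilon are my assumptions; the idea is just that the flipped camera's up vector points the opposite way, so matching CAMERA_MATRIX against the passed-in transform identifies the pass:

```glsl
shader_type spatial;

// Transform of the flipped normal-camera, set each frame from
// GDScript (the name is a placeholder, not from the original post).
uniform mat4 normal_cam_xform;

varying float is_normal_pass;

void vertex() {
	// CAMERA_MATRIX is the transform of the camera currently drawing.
	// If its up vector matches the flipped camera's up vector (within
	// a small epsilon for float error), this is the normals pass.
	vec3 cam_up = CAMERA_MATRIX[1].xyz;
	vec3 normal_cam_up = normal_cam_xform[1].xyz;
	is_normal_pass = 0.0;
	if (distance(cam_up, normal_cam_up) < 0.001) {
		is_normal_pass = 1.0;
	}
}

void fragment() {
	if (is_normal_pass > 0.5) {
		// Draw world-space normals, packed into 0..1, as the albedo.
		ALBEDO = normalize(mat3(CAMERA_MATRIX) * NORMAL) * 0.5 + 0.5;
	} else {
		ALBEDO = vec3(1.0); // regular stylized shading goes here
	}
}
```

Again, just a sketch of the technique as described, not the original author's exact code.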
I should mention that I've run a Sobel over the normals, with depth in the alpha channel.
@UnknownUser What code do you use to do the comparison of the matrices in the vertex function? I was trying to replicate your solution but got stuck on that bit.
@Jick - Unknown user posts are all posts that were lost but later recovered, as mentioned here. I wish I could remember who made the original post prior to it being lost...
Regardless, hopefully whoever the original poster was will see this and chime in :+1:
@TwistedTwigleg Oh I see. Thanks for the info! Yeah, hopefully an answer will come along. :)
@UnknownUser said: I have worked out a workaround. What you do is set up an extra camera as a child of its own Viewport inside a ViewportContainer. Turn the camera 180 degrees on the z-axis and set the scale of the ViewportContainer to (-1, -1) to reverse the rotation. In the shader of each object, pass in the transforms of both cameras. You then run a test in the vertex function to tell which camera is being drawn. Using this information you can draw the normals as the albedo whenever the normal-camera is drawing. You can now use the extra viewport texture for post-processing. Congrats.
This is so cool! I had the same idea but didn't know how to do it. You're a genius! Do you have sample code to achieve your result? Thanks!