The only thing I have no idea how to achieve is getting information from the tilemap so I can add outlines correctly. I thought of generating a texture and passing it to the shader, but then how would I know the cell of the tile that is being drawn (so I can access the right coord of the texture)?
@camelCase For square mode tilemaps it's pretty straightforward. It might be a bit trickier for isometric tilemaps, but possibly not too much.
The parts I know you need are:
- The data texture itself, created with some defined mapping from tile map cells to pixels, plus a transform from tile map cell coordinates to those pixels. This is just the identity transform if you don't have any negative cell coordinates; otherwise you'll need to translate from the tile map's used rect to the texture's pixel space.
- A transform from world-space coordinates to tile map cells. You'll need to set this up for your map, but it's basically just a transform doing what `TileMap#world_to_map()` does, in terms of your tile origins.
- A transform from shader vertex-space coordinates to world-space coordinates. This arguably should be built in, but the CanvasItem shader vertex built-ins docs mention the work-around.
- Chain the transforms together at the most convenient point, pass in whichever combination you intend to use in the shader as another shader param, use it to map screen coordinates to cell-space coordinates, and then use `texelFetch()` to grab your data out (see the sketches below this list).
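To make that concrete, here's a rough GDScript sketch of the setup side (Godot 3.x, to match `world_to_map()`). It assumes the ShaderMaterial sits on whatever CanvasItem draws the effect and reads from a separate TileMap; every name in it (`map_data`, `vertex_to_pixel`, the "255 = occupied" value) is made up for illustration, not taken from the demo:

```gdscript
# Rough sketch only; assumes this script sits on the CanvasItem carrying the ShaderMaterial.
extends CanvasItem

export(NodePath) var tile_map_path
onready var tile_map: TileMap = get_node(tile_map_path)

func _ready() -> void:
    var used: Rect2 = tile_map.get_used_rect()      # cell-space rect covering all placed tiles

    # 1. Data texture: one pixel per cell of the used rect, data packed into the red channel.
    var img := Image.new()
    img.create(int(used.size.x), int(used.size.y), false, Image.FORMAT_R8)
    img.lock()
    for cell in tile_map.get_used_cells():
        var px = cell - used.position               # cell coords -> texture pixel (handles negative cells)
        img.set_pixel(int(px.x), int(px.y), Color8(255, 0, 0))  # e.g. 255 = "cell is occupied"
    img.unlock()
    var tex := ImageTexture.new()
    tex.create_from_image(img, 0)                   # flags = 0: no filtering/repeat, keep values exact

    # 2. The individual transforms.
    # Shader vertex space -> world: the work-around from the CanvasItem shader docs.
    var vertex_to_world := get_global_transform()
    # World -> TileMap local: undo the TileMap's own global transform.
    var world_to_local := tile_map.get_global_transform().affine_inverse()
    # TileMap local -> cell: what world_to_map() does for a plain square map.
    var local_to_cell := Transform2D(
        Vector2(1.0 / tile_map.cell_size.x, 0.0),
        Vector2(0.0, 1.0 / tile_map.cell_size.y),
        Vector2.ZERO)
    # Cell -> data-texture pixel: just a translation by the used rect's origin.
    var cell_to_pixel := Transform2D(Vector2.RIGHT, Vector2.DOWN, -used.position)

    # 3. Chain them (applied right to left) into the one transform the shader needs.
    var vertex_to_pixel := cell_to_pixel * local_to_cell * world_to_local * vertex_to_world

    material.set_shader_param("map_data", tex)
    material.set_shader_param("map_pixel_size", used.size)
    material.set_shader_param("vertex_to_pixel", vertex_to_pixel)
```

The `local_to_cell` transform above is only right for a plain square map with the default tile origin; for an isometric map you'd substitute whatever your `world_to_map()` actually does, per the second bullet.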
Here's an example in a demo project I just posted, where `map_transform` combines the map-space transforms, the shader parameter `seen_transform` combines all the transforms needed to go from my rendering vertex-space coordinates to cell-space coordinates, and `seen_uv` is the final shader texel-space coordinate used to look up in the texture.
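The shader side of that sketch might look something like this -- again just a guess at the shape, not the demo's actual code (the demo's `seen_transform` / `seen_uv` play the same roles as `vertex_to_pixel` / `map_pixel` here):

```glsl
shader_type canvas_item;

// Hypothetical uniforms matching the GDScript sketch above.
uniform sampler2D map_data;      // one pixel per cell
uniform vec2 map_pixel_size;     // size of map_data in pixels (= used rect size in cells)
uniform mat4 vertex_to_pixel;    // combined vertex-space -> data-texture-pixel transform

varying vec2 map_pixel;

void vertex() {
    // A Transform2D shader param arrives as a mat4, same as in the docs' work-around.
    map_pixel = (vertex_to_pixel * vec4(VERTEX, 0.0, 1.0)).xy;
}

void fragment() {
    ivec2 cell = ivec2(floor(map_pixel));
    // Ignore fragments that fall outside the used rect.
    if (cell.x >= 0 && cell.y >= 0 && cell.x < int(map_pixel_size.x) && cell.y < int(map_pixel_size.y)) {
        float occupied = texelFetch(map_data, cell, 0).r;   // ~1.0 where the GDScript wrote 255
        // ... compare against neighbouring texels here to decide where the outline goes ...
    }
}
```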
The big wrinkle for your case is that I'm not certain of the best way to handle overlapping and variable-height isometric tiles. I haven't done anything with isometric tiles, so this is potentially entirely speculative. I'm guessing that the tile origin is the bottom left, and that you then end up with a regular tiling of the bases, which allows `TileMap#world_to_map()` to work in terms of those bases? If that's true, then I think the only extra thing you need is to know how high you are in the current tile, so that you can transform your initial vertex to the tile base coordinate and then do the rest of the lookup mapping in terms of that.
To accomplish that, I believe you can use the fact that the fragment shader `UV` coordinate is in the texture-space of the tile's backing texture/atlas. If the arrangement of tiles in your tileset texture(s) is very regular, you could probably do a simple shader-code mapping from any `UV` to the tile bottom. If not, you could create yet another data texture which lets you map from tile `UV` to tile height -- assuming each tile width is the same, you could get away with a one-pixel-wide strip for each tile column, even if the heights are arbitrary.
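For the "very regular" case, that "how high am I in the current tile" value can come straight from `UV`. A purely illustrative sketch (untested; `tile_px` is a hypothetical uniform giving the fixed tile size in atlas pixels):

```glsl
shader_type canvas_item;

// Hypothetical: the tileset atlas is a regular grid of fixed-size tiles.
uniform vec2 tile_px = vec2(64.0, 96.0);   // tile size in atlas pixels (base width x full height)

void fragment() {
    vec2 atlas_px = UV / TEXTURE_PIXEL_SIZE;            // UV -> atlas pixel coordinates
    vec2 in_tile = mod(atlas_px, tile_px);              // this fragment's position inside its tile
    float above_base = tile_px.y - in_tile.y;           // pixels above the tile's bottom edge
    // Shift the fragment's world/cell position down by this amount (converted to world
    // units) before the cell lookup, so tall tiles resolve to the cell of their base.
    COLOR = vec4(vec3(above_base / tile_px.y), 1.0);    // just visualising the value for debugging
}
```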
The last potential gotcha, if you haven't done data texture mappings before, is one I've found myself: even though the Godot shader language supports `isampler2D` and `usampler2D`, the engine doesn't seem to expose any way to set up textures to be read as integers. So if you use something like `FORMAT_R8` for the image and pack your data values in by setting the pixels to something like `Color8(mask, 0, 0)`, then in shader code you'll need to extract the value as `int mask = int(255.0 * texelFetch(mask_texture, iuv, 0).r)`. The underlying data is stored as 8-bit integer values, and floating point can exactly represent all of the fractional results, so there's no lossiness -- it's just a bit fiddly.
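For what it's worth, if the value you pack is a bit mask (as the variable name suggests), the shader side ends up as that extraction plus ordinary bit tests, e.g. (names and bit meanings hypothetical):

```glsl
int mask = int(255.0 * texelFetch(mask_texture, iuv, 0).r);  // undo the 0..1 normalisation
bool neighbour_up = (mask & 1) != 0;                         // e.g. bit 0 = "cell above is occupied"
```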
Hope that helps!