
Psp2i Model Formats


kion

It looks like we've managed to cross off three of the properties defined in the header so far: the materials, the vertex groups, and the (unused) indices. That means we have two more properties to trace through: the bones and what I think should be the draw calls. We'll do the bones first, because they come next in order.

416956731_Screenshotfrom2019-12-2123-48-54.png.ee297b9c447618a2b9d8d65184d352d4.png

Using our picture of the header, we have 0x16 bones, with a depth of 0x09, at offset 0x10. To be honest I'm not sure the depth value is really needed. If they're storing the bones the way I hope they are, the values should be relative to the T-pose and not to the origin, in which case we'd have to multiply by the inverse transpose of the parent's rotation matrix to get values relative to the T-pose. My best guess is that the depth is used for some kind of malloc or push/pop-matrix bookkeeping, but they shouldn't even need it if they allocate a fixed-size (say, 100-entry) matrix stack. Rambling aside, those are the values. We know the bone definitions start at 0x10, we know there are 0x16 bones, and we know the next data to follow the bones in file order is the material data at 0xc80, which means we should have a per-bone struct size of about 0x90 bytes. I guess we should go ahead and take a look.
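If the depth value is the maximum nesting of the bone hierarchy, it's redundant in the sense that it can be recomputed from the parent ids alone; a sketch (assuming 0xFFFF marks a root bone, which is a guess):

```javascript
// Compute the maximum depth of a bone hierarchy from parent ids.
// Assumes root bones use 0xFFFF as a "no parent" sentinel (a guess);
// if the file's depth field matches this, it was probably just
// precomputed for sizing the matrix stack at load time.
function boneTreeDepth(bones) {
  const depthOf = (i) => {
    let depth = 1;
    let parent = bones[i].parent_id;
    while (parent !== 0xffff) {
      depth++;
      parent = bones[parent].parent_id;
    }
    return depth;
  };
  let max = 0;
  for (let i = 0; i < bones.length; i++) {
    max = Math.max(max, depthOf(i));
  }
  return max;
}

// Tiny example hierarchy: 0 -> 1 -> 2, plus a second root 3.
const exampleBones = [
  { parent_id: 0xffff },
  { parent_id: 0 },
  { parent_id: 1 },
  { parent_id: 0xffff },
];
```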

188714970_Screenshotfrom2019-12-2223-00-49.png.20f6b7f1984cddeb93ac3634cbd827d1.png

Here's an isolated bone struct, and we can write a definition

struct unj_bone_t {
  uint32_t flags;
  uint16_t bone_id;
  uint16_t parent_id;
  uint16_t child_id;
  uint16_t sibling_id;
  float position[3];
  int32_t rotation[3];
  float scale[3];
  float transform[16];
  float bound_sphere[4];
  int32_t unknown;
  float half_dimensions[3];
};

Adding the fields up gives exactly 0x90, which matches the stride, so the definition above lines up. Though I thought the bone's parent and child would be in the bone definition, and if they're not, that means there's still a bunch of information that needs to be defined. We still need the weights of the bone influences on the vertex groups, the material used when drawing a specific group of vertices, and the parent/child relationship for the bones. So I wonder if all of that will be covered in the last draw call property.
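As a quick arithmetic check, the field sizes do add up to the 0x90 stride:

```javascript
// Field sizes of unj_bone_t in bytes; they should sum to the
// 0x90 stride implied by the file layout.
const boneFieldSizes = {
  flags: 4,
  bone_id: 2, parent_id: 2, child_id: 2, sibling_id: 2,
  position: 3 * 4,
  rotation: 3 * 4,
  scale: 3 * 4,
  transform: 16 * 4,
  bound_sphere: 4 * 4,
  unknown: 4,
  half_dimensions: 3 * 4,
};
const boneStructSize = Object.values(boneFieldSizes).reduce((a, b) => a + b, 0);
```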

Edit:

Fixed: the parent/child relationship is in the bone struct after all. And the vertex weights should be declared directly in the vertex list. So the last thing should be the draw calls, pairing a specific range of vertices in a vertex group with a material to be rendered.
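With parent, child and sibling ids in the struct, the hierarchy can be walked Ninja-style: follow child links depth-first and sibling links across. A sketch (assuming 0xFFFF as the "none" sentinel, which is a guess):

```javascript
// Depth-first walk of a child/sibling-linked bone tree, collecting
// bones in traversal order. Assumes 0xFFFF means "no child" /
// "no sibling" (a guess based on common conventions in Ninja-style formats).
function walkBones(bones, rootId, visit) {
  let id = rootId;
  while (id !== 0xffff) {
    visit(bones[id]);
    walkBones(bones, bones[id].child_id, visit);
    id = bones[id].sibling_id;
  }
}

// Example: bone 0 has one child (1), which has a sibling (2).
const bones = [
  { bone_id: 0, child_id: 1, sibling_id: 0xffff },
  { bone_id: 1, child_id: 0xffff, sibling_id: 2 },
  { bone_id: 2, child_id: 0xffff, sibling_id: 0xffff },
];
const order = [];
walkBones(bones, 0, (b) => order.push(b.bone_id));
```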

Edited by kion

Time to start looking into the draw call property. And since the draw calls are grouped at the end of the file, that means I can include everything as one image.

draw_calls_aaaa.thumb.PNG.6ed44a8380ed3d995f6ec015b36a6a24.PNG

In the blue at the very bottom we have our friendly neighborhood file header. And the number of drawcall groups is 0x01 at offset 0x5f04 labelled as ①. What we would expect to have here is a pairing between a vertex group with a specific material number. In this case it looks like we have another redundant reference to a struct before we get the actual group calls. So let's take a look at the parent struct which is colored in with off-red.

struct unj_drawgroups_t {
  uint8_t unknown_byte_1;
  uint8_t unknown_byte_2;
  uint16_t unknown_short_1;
  uint32_t direct_draw_count;
  uint32_t direct_draw_ofs;
  uint32_t indexed_draw_count;
  uint32_t indexed_draw_ofs;
};

We have a pretty short five-dword struct. In the first dword it looks like we have two bytes and some kind of short value, but it's not known what they're used for. The next dword is an int for the actual number of draw calls, followed by the offset to the first draw call. And last we have a count and a pointer that are both zero. I think what's actually going on here is another difference between unj and xnj: unj uses direct buffer geometry (triangle strips are generated from the vertices directly), whereas xnj uses indexed geometry. So I think the reason there are two pairs here is that one is for direct draw calls and the other is for indexed draw calls.
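If unj really is direct (non-indexed) triangle strips, expanding a strip into triangles is just a sliding three-vertex window with the winding flipped on every other step; a quick sketch:

```javascript
// Expand a triangle strip of N vertices into N - 2 triangles,
// alternating winding so every triangle faces the same way.
function stripToTriangles(strip) {
  const tris = [];
  for (let i = 0; i < strip.length - 2; i++) {
    if (i % 2 === 0) {
      tris.push([strip[i], strip[i + 1], strip[i + 2]]);
    } else {
      // Swap the first two vertices to keep a consistent winding.
      tris.push([strip[i + 1], strip[i], strip[i + 2]]);
    }
  }
  return tris;
}

// A 5-vertex strip yields 3 triangles.
const tris = stripToTriangles(["a", "b", "c", "d", "e"]);
```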

Last we can look into the direct draw call where we have a single struct outlined at the top of the yellow section. The struct array starts at 0x5ce4, and then ends at 0x5f00, which is a length of 0x21c. And then we have 15 (0x0f) draw calls, so we divide to get an individual struct size of 0x24 or specifically 9 dwords. So let's take a look at the struct definition.

struct unj_direct_call_t {
  float center[3];
  float radius;
  uint32_t top_level_bone;
  uint32_t unknown_int1;
  uint32_t material_group;
  uint32_t vertex_group;
  uint32_t unknown_int2;
};

And here we have it. Basically all we need to know is which material groups are paired with which vertex groups. And I think the reason they did it this way is because of the way they defined the vertex weights. Each vertex group can only have up to four bone influences, as opposed to having a bone id and influence declared directly in the vertex list. So that means any group of vertices is going to be split into segments of four bones for draw calls. Which, to put it mildly, is a pretty stupid way to do something like this: it means a model with only 2 materials is going to need 15 draw calls because of this design.

The good news is that we've effectively traced out the entire file, so the next post will be about how we take what we've documented and formulate an approach for parsing the model data.


Now that we have a decent idea of what's going on in the file, we can start drafting out an approach to parse the model, and then try to find strategies for exporting the models. To start out we'll try to make a class that can take a unj file as an argument and then parse the contents to be exported as a different format.

class UnjReader {
  
  constructor () {
    this.materials = [];
    this.vertex_groups = [];
    this.bones = [];
    this.direct_calls = [];
  }
  
  parse(arraybuffer) {
    if(arraybuffer.byteLength < 16) {
      return false;
    }
    
    this.view = new DataView(arraybuffer);
    const MAGIC_NUOB  = 0x424f554e;
    const magic = this.view.getUint32(0x00, true);
    const length = this.view.getUint32(0x04, true);
    const header_ofs = this.view.getUint32(0x08, true);
    const three = this.view.getUint32(0x0c, true);
    
    if(magic !== MAGIC_NUOB) {
      return false;
    }
    
    if(length !== arraybuffer.byteLength - 8) {
      return false;
    }
    
    if(header_ofs > arraybuffer.byteLength) {
      return false;
    }
    
    this.header_ofs = header_ofs;
    
    if(three !== 3) {
      console.warn("Omg, is this not a three?")
    }
    
    this.readHeader();
    this.readMaterials();
    this.readVertexGroups();
    this.readBones();
    this.readDrawCalls();
    
    return true;
  }
  
  readHeader() {
   // pending implementation 
  }
  
  readMaterials() {
   // pending implementation 
  }

  readBones() {
   // pending implementation 
  }
  
  readVertexGroups() {
   // pending implementation 
  }
  
  readDrawCalls() {
   // pending implementation 
  }
  
}

We use Unj as a prefix because it can get confusing if we generically refer to "models", as there could potentially be different formats. For the suffix we have the option of "loader", "parser" or "reader". A loader would imply that we include the fetch operation in the class, but in this case we expect an arraybuffer as an argument, which means the file has already been fetched. And "parse" is the name of the function we call to actually implement the act of reading the values. So we'll go ahead and call the class UnjReader, as right now we're mainly focused on reading the values as-is; we can decide what to do with them later.

For the constructor, we create a few arrays that will be populated as we read information from the file. And then parse is the function we call to read the values. In it we have a few sanity checks that return false if it doesn't look like the file can be parsed, and true if the file was able to be read (even if the content turns out to be empty). After that we read the header, and we have four properties to read from the file: materials, vertex groups, bones and draw calls. So we can start by implementing each one of these functions.

Edited by kion

First step is to read the header.

readHeader() {
  const ofs = this.header_ofs;
  
  const unj_header_t = {
    center : {
      x : this.view.getFloat32(ofs + 0x00, true),
      y : this.view.getFloat32(ofs + 0x04, true),
      z : this.view.getFloat32(ofs + 0x08, true),
    },
    radius : this.view.getFloat32(ofs + 0x0c, true),
    material_count : this.view.getUint32(ofs + 0x10, true),
    material_ofs : this.view.getUint32(ofs + 0x14, true),
    vertex_group_count : this.view.getUint32(ofs + 0x18, true),
    vertex_group_ofs : this.view.getUint32(ofs + 0x1c, true),
    index_group_count: this.view.getUint32(ofs + 0x20, true),
    index_group_ofs : this.view.getUint32(ofs + 0x24, true),
    bone_count : this.view.getUint32(ofs + 0x28, true),
    bone_tree_depth : this.view.getUint32(ofs + 0x2c, true),
    bone_ofs : this.view.getUint32(ofs + 0x30, true),
    draw_count: this.view.getUint32(ofs + 0x34, true),
    draw_ofs : this.view.getUint32(ofs + 0x38, true)
  };
  
  this.header = unj_header_t;
}

Pretty straightforward. We 'seek' to the header location in the file, read the struct, and then save it on the instance so we can reference the various counts and offsets in other functions.


I guess now I get to start dipping into my sanity meter for the materials. I'm better with geometry than materials, so if needed I may have to come back and revisit this function to address anything left incomplete on the first pass, but we'll go ahead and fill in what we can.

readMaterials() {
  
  const mat_count = this.header.material_count;
  
  // First we read the number of textures in each material and then get the offset
  // to each of the respective material definitions
  
  for(let i = 0; i < mat_count; i++) {
    const ofs = this.header.material_ofs;
    const unj_matlist_size = 8;
    const unj_matlist_t = {
      diffuse_texture_count : this.view.getUint8(ofs + i*unj_matlist_size + 0x00),
      effect_texture_count : this.view.getUint8(ofs + i*unj_matlist_size + 0x01),
      material_ofs : this.view.getUint32(ofs + i*unj_matlist_size + 0x04, true)
    }
    this.materials[i] = unj_matlist_t;
  }
  
  // Then we 'seek' to each one of the respective material definitions, and then we
  // read the values of all of the respective properties defined
  
  for(let i = 0; i < mat_count; i++) {
    const ofs = this.materials[i].material_ofs;
  
  }
  
}

The whole function ended up being kind of large. I guess I'll keep it here.

Edited by kion

Next step vertex groups, I guess it should look something like this:

readVertexGroups() {

	// First we 'seek' to the vertex group definitions and read the
	// uv count and the offset to the vertex list definition
	
	let ofs = this.header.vertex_group_ofs;
	
	for(let i = 0; i < this.header.vertex_group_count; i++) {
		const unj_vertex_group_t = {
			uv_count : this.view.getUint32(ofs + 0, true),
			group_ofs : this.view.getUint32(ofs + 4, true)
		}
		this.vertex_groups[i] = unj_vertex_group_t;
		ofs += 8;
	}
	
	// Then we seek to each one of the vertex list definitions and
	// read the number of bone influences, which bones, how many vertices
	// and the format of the stored vertices in the vertex list
	
	for(let i = 0; i < this.header.vertex_group_count; i++) {
		
		ofs = this.vertex_groups[i].group_ofs;
		
		const unj_vertex_list_t = {
			unknown_1 : this.view.getUint32(ofs + 0x00, true),
			vertex_format : this.view.getUint32(ofs + 0x04, true),
			unknown_2 : this.view.getUint8(ofs + 0x08),
			unknown_3 : this.view.getUint8(ofs + 0x09),
			vertex_length : this.view.getUint8(ofs + 0x0a),
			nop : this.view.getUint16(ofs + 0x0e, true),
			vertex_count_ofs : this.view.getUint32(ofs + 0x10, true),
			vertex_list_ofs : this.view.getUint32(ofs + 0x14, true),
			bone_binding_ofs : this.view.getUint32(ofs + 0x18, true),
			bone_binding_count : this.view.getUint32(ofs + 0x1c, true),
			total_vertex_count : this.view.getUint32(ofs + 0x20, true),
			unknown_4 : this.view.getUint32(ofs + 0x24, true),
			unknown_5 : this.view.getUint32(ofs + 0x28, true),
			vertex_scale : this.view.getFloat32(ofs + 0x2c, true)
		};
		
		// Read the number of vertices in the vertex list
		
		ofs = unj_vertex_list_t.vertex_count_ofs;
		unj_vertex_list_t.vertex_count = this.view.getUint32(ofs, true);
		
		// Read each of the bone influences for the group 
		
		unj_vertex_list_t.bones = new Array();
		ofs = unj_vertex_list_t.bone_binding_ofs;
		for(let k = 0; k < unj_vertex_list_t.bone_binding_count; k++) {
			unj_vertex_list_t.bones[k] = this.view.getUint32(ofs, true);
			ofs += 4;
		}
		
		// Then we save all of the values from the struct into instance memory
		
		for(let key in unj_vertex_list_t) {
			this.vertex_groups[i][key] = unj_vertex_list_t[key];
		}
	}
	
	// Then we seek to the vertex list and read the values for the vertices

}

The whole thing is pretty long, so I'll stash it here. That covers the materials and vertex groups, which should be the longer ones. Next up we're reading the bones and then the draw calls.
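One thing worth flagging from the struct above: the vertex_scale float hints that positions may be stored as fixed-point integers (common on PSP) rather than floats. That's unconfirmed, and decodePosition below is purely illustrative, but decoding would look roughly like:

```javascript
// Hypothetical decode of an int16 fixed-point position using the
// group's vertex_scale. Whether unj actually stores positions this
// way is unconfirmed -- this only illustrates the idea.
function decodePosition(view, ofs, scale) {
  return {
    x: view.getInt16(ofs + 0, true) * scale,
    y: view.getInt16(ofs + 2, true) * scale,
    z: view.getInt16(ofs + 4, true) * scale,
  };
}

// Example: three little-endian int16 components with a scale of 0.5.
const buf = new ArrayBuffer(6);
const v = new DataView(buf);
v.setInt16(0, 2, true);
v.setInt16(2, -4, true);
v.setInt16(4, 6, true);
const pos = decodePosition(v, 0, 0.5);
```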

Edited by kion

Read bones function. Pretty short and simple.

readBones() {
	
	let ofs = this.header.bone_ofs;
	for(let i = 0; i < this.header.bone_count; i++) {
	
		const unj_bone_t = {
			flags : this.view.getUint32(ofs + 0x00, true),
			bone_id : this.view.getUint16(ofs + 0x04, true),
			parent_id : this.view.getUint16(ofs + 0x06, true),
			child_id : this.view.getUint16(ofs + 0x08, true),
			sibling_id : this.view.getUint16(ofs + 0x0a, true),
			position : {
				x : this.view.getFloat32(ofs + 0x0c, true),
				y : this.view.getFloat32(ofs + 0x10, true),
				z : this.view.getFloat32(ofs + 0x14, true)
			},
			rotation : {
				x : this.view.getInt32(ofs + 0x18, true),
				y : this.view.getInt32(ofs + 0x1c, true),
				z : this.view.getInt32(ofs + 0x20, true)
			},
			scale : {
				x : this.view.getFloat32(ofs + 0x24, true),
				y : this.view.getFloat32(ofs + 0x28, true),
				z : this.view.getFloat32(ofs + 0x2c, true)
			},
			transform : [
				this.view.getFloat32(ofs + 0x30, true),
				this.view.getFloat32(ofs + 0x34, true),
				this.view.getFloat32(ofs + 0x38, true),
				this.view.getFloat32(ofs + 0x3c, true),

				this.view.getFloat32(ofs + 0x40, true),
				this.view.getFloat32(ofs + 0x44, true),
				this.view.getFloat32(ofs + 0x48, true),
				this.view.getFloat32(ofs + 0x4c, true),

				this.view.getFloat32(ofs + 0x50, true),
				this.view.getFloat32(ofs + 0x54, true),
				this.view.getFloat32(ofs + 0x58, true),
				this.view.getFloat32(ofs + 0x5c, true),

				this.view.getFloat32(ofs + 0x60, true),
				this.view.getFloat32(ofs + 0x64, true),
				this.view.getFloat32(ofs + 0x68, true),
				this.view.getFloat32(ofs + 0x6c, true)
			],
			bound_sphere : {
				x : this.view.getFloat32(ofs + 0x70, true),
				y : this.view.getFloat32(ofs + 0x74, true),
				z : this.view.getFloat32(ofs + 0x78, true),
				r : this.view.getFloat32(ofs + 0x7c, true)
			},
			unknown : this.view.getInt32(ofs + 0x80, true),
			half_dimensions : {
				x : this.view.getFloat32(ofs + 0x84, true),
				y : this.view.getFloat32(ofs + 0x88, true),
				z : this.view.getFloat32(ofs + 0x8c, true)
			}
		}

		this.bones[i] = unj_bone_t;
		ofs += 0x90;
	}
}

 


Okay, and now we read the draw calls.

readDrawCalls() {

	let ofs = this.header.draw_ofs;
	const groups = [];
	
	for(let i = 0; i < this.header.draw_count; i++) {
	
		const unj_drawgroups_t = {
			unknown_byte_1 : this.view.getUint8(ofs + 0x00),
			unknown_byte_2 : this.view.getUint8(ofs + 0x01),
			unknown_short_1 : this.view.getUint16(ofs + 0x02, true),
			direct_draw_count : this.view.getUint32(ofs + 0x04, true),
			direct_draw_ofs : this.view.getUint32(ofs + 0x08, true),
			indexed_draw_count : this.view.getUint32(ofs + 0x0c, true),
			indexed_draw_ofs : this.view.getUint32(ofs + 0x10, true)
		}
		
		groups[i] = unj_drawgroups_t;
		ofs += 0x14;
		
	};
	
	groups.forEach(group => {
		
		ofs = group.direct_draw_ofs;
		for(let i = 0; i < group.direct_draw_count; i++) {
			
			const unj_direct_call_t = {
				center : {
					x : this.view.getFloat32(ofs + 0x00, true),
					y : this.view.getFloat32(ofs + 0x04, true),
					z : this.view.getFloat32(ofs + 0x08, true),
				},
				radius : this.view.getFloat32(ofs + 0x0c, true),
				top_level_bone : this.view.getUint32(ofs + 0x10, true),
				unknown_int1 : this.view.getUint32(ofs + 0x14, true),
				material_group : this.view.getUint32(ofs + 0x18, true),
				vertex_group : this.view.getUint32(ofs + 0x1c, true),
				unknown_int2 : this.view.getUint32(ofs + 0x20, true)
			}
			ofs += 0x24;
			this.direct_calls.push(unj_direct_call_t);
		}
		
	});
	
}

Now that we can read the values from the file, next we can start thinking about how the file is structured and then try to think of approaches for exporting this as a different file format. Full source for the reader is here.

Edited by kion

Now that we can read the values from the file, we can think about an approach for reading and exporting the files. Ideally (to be lazy) I'd convert files on the command line. I think the dash model format is one of the easier formats to work with directly, but I'm not sure it's flexible enough to handle the materials and textures. The best approach is likely going to be to use Threejs to parse the files into a mesh, and then the models can be exported from there.

In terms of approach I think it would be easiest to have the assets bundled as nbl files. That way I can grab the entire archive, get the unj model file, get the textures, and get the bone and texture names to pass into the model parser. This will later mean that I'll likely need a better approach for fbp, and eventually the alternative compression format. But for right now there's generally enough content to work with.

For the model specifically there are two aspects that I'm not exactly sure how will work out. One is multiple textures, which is not something I've spent time with before; I'm not sure if that will be multiple textures mapped to diffuse, or bump, so we'll see how that turns out. The second is how the vertex groups will map to draw calls. I think groups are stored as triangle strips, so I might need to store each group as its own strip, track the start and length, and then for each draw call reference the strip and assign a material. If possible I might be able to reduce the number of calls if I can group consecutive material calls together.


You can choose whatever model file format you want;  you may have to go digging through someone's SDK for the information to write them correctly, though.

DAE and FBX both have text formats, and store rigging and animation data.  FBX does have the capability to store textures within the file.

Lightwave uses a pair of files : LWO and LWS to store models (LWO) and animation/rigging (LWS).  Images are stored external to the model.

Blender .blend files contain everything, I believe.

.obj only stores model information;  on the plus side, it's well understood, and in Ascii.  Daz3D and Poser store rigging information in a .cr2 file, which contains rigging, morphs, animation data, etc.

Microsoft .x is well documented and may work well, but I don't know what applications use it.

.3ds, .ma, and .mb are all proprietary Autodesk formats.

If you want something every 3d app can read, .fbx and .dae are your best bets, along with the LWO2 Lightwave object file. (Lightwave uses LWO3 now, but can still read LWO1/2 files).

As far as alternative compression formats, why not use .zip?  There are several 3rd party opensource libraries for such.


On 12/31/2019 at 2:18 PM, Kryslin said:

You can choose whatever model file format you want;  you may have to go digging through someone's SDK for the information to write them correctly, though.

[...]

I spent several months trying to write a .dae exporter for Threejs, and it was a horrible pain to work with. XML is not a fun data type to begin with, but the way rigged meshes are handled completely threw me off. In most cases it's a lot easier to start with something simple and then implement one aspect at a time: start with a simple mesh, add materials, add textures, add a skeleton. The problem is that in .dae the skeleton is the root node and the mesh is referenced from there. That means you end up having to manage two drastically different use cases, with and without bones, when building the node hierarchy, and I didn't want to deal with that.

Also the format is needlessly flexible and needlessly complicated. To declare something like position, you need to declare the vertex list and give it a name, then declare that you're using XYZ floats and give that a name. The whole approach of having to define lists, manage ids, and follow references to references of references for names was really stupid. And I found that even in the best case, if you managed to get something to work (like embedded base64 textures), the spec is so messed up and flexible that programs won't have the full spec implemented.

And that's when I figured that a 3d file format is basically just a way of encoding vertices and faces that can be read by a program. So if I make a stupidly simple format, I can write plugins for any program I need to work with, and spend the time I would have invested in targeting .dae on learning how to work with different plugins, which is probably more valuable information anyway. What I came up with is the 'dash model format'. The idea is that it's a simple-as-possible implementation of rigged meshes that uses fixed structures to eliminate guesswork. The vertex list is a list of unique positions with bone indices and weights. The faces are a list of three indices per triangle, with a material id and per-index uv and vertex color. Materials support a few common attributes like diffuse and specular, textures are included as internal png files, bones are included as world-position 4x4 matrices, and animations are a flat list of bone id, frame id, prs flags, and values for position, rotation and scale for each keyframe.

The idea is that it's not designed to cover every use case, but to be a simple representation of Dreamcast-level graphics that's easy to import, export and support in different applications. I've implemented import and export for Threejs and Noesis, and a developer from Germany implemented an importer for Unity. To really make the format viable I need to implement a Blender import/export plugin, but I find the Blender Python API and documentation quite confusing, so I'll probably need to stumble my way through it and ask a ton of beginner questions to see if I can get something working.

standards.png.5180717103603bac55392a8ab7075e50.png

Though thankfully everyone else in the world seems to hate .dae as much as I do (probably more), and the Khronos Group has recently come out with the glTF 2.0 specification, a JSON/binary format meant to replace Collada (.dae) as the open standard. Adoption has been incredibly fast and complete. There are a few issues that I think make working with the format directly pretty annoying, but it's been implemented in Threejs and in Blender. So if you can write an import plugin for a format (like .nj or .unj) in Threejs or Blender, you can then export to .gltf, which can be reliably used with a lot of different 3d applications like Sketchfab and Godot.

For compression I was referring to Sega's internal compression for the NBL archive in Phantasy Star Portable. They're not using PRS compression, they're using something else. Which could be a slightly tweaked off-the-shelf algorithm, but it doesn't seem to want to decompress with the small array of compression library functions I have access to.


1200165381_Screenshot_2020-01-03Agrajag-sama(3).thumb.png.9aeaddd7fddc393bed2d60c768b98dfd.png

 

I was expecting incomplete triangles, backwards triangles, or holes in the model, but surprisingly the mesh seems to work. One issue that I need to keep track of is that I'm encoding the triangle strips as direct geometry. This means that instead of having a unique vertex list and then a list of indices, I'm creating a copy of every vertex every time it's used, so the booma above ends up having 2,154 vertices. This might seem a little wasteful, but for now it's better than the other two options:

1. Setting the mesh to triangle-strip mode. This would work, but it means that I wouldn't be able to combine draw calls for materials later, as the triangle strips would bleed into each other. The booma has two materials, and at most it should probably need two or three draw calls; with Segac's implementation of bone weight groups it has 15. If I used strip mode for rendering the model, I'd have to keep the same number of draw calls, so I'm not using this option because it would prevent optimization later.

2. Creating a unique list of vertices and indexing the faces. This is something I probably should do, and I was really tempted to implement it off the bat, but I'm not completely familiar with how the game handles vertices. Since the vertices are pre-arranged in strips, vertices might already be repeated within a strip, which would mean I'd have to first check for unique vertices in each strip, add those to the vertex list, and work my way through from there. I'm also not familiar with how Threejs treats indices: an index for a vertex presumably covers everything from vertex color, uv, weight and normals to position, so there might not even be much of an optimization, even though it would be lighter. In any case, once exported, both gltf and dmf will create a unique index list and make a pretty small file, so the direct geometry only matters while previewing the model in the viewport. Unless there are some huge models that take up too much memory, there isn't much downside to the direct geometry approach.
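The "check for unique vertices" step in option 2 is basically vertex welding: key each vertex by all of its attributes and reuse an existing index when the key repeats. A sketch (weldVertices and the attribute layout are illustrative, not from the file format):

```javascript
// Weld duplicate vertices into a unique list plus an index buffer.
// The key must cover every attribute (position, uv, color, weights),
// since two vertices that differ in any of them can't be merged.
function weldVertices(vertices) {
  const unique = [];
  const indices = [];
  const seen = new Map();
  for (const vert of vertices) {
    const key = JSON.stringify(vert);
    if (!seen.has(key)) {
      seen.set(key, unique.length);
      unique.push(vert);
    }
    indices.push(seen.get(key));
  }
  return { unique, indices };
}

const welded = weldVertices([
  { pos: [0, 0, 0], uv: [0, 0] },
  { pos: [1, 0, 0], uv: [1, 0] },
  { pos: [0, 0, 0], uv: [0, 0] }, // duplicate of the first
]);
```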

If the image above is any indication, it's that bones are giving me the most trouble, which is something I wasn't exactly expecting. Bones are either a pre-encoded 4x4 matrix, or a list of position, rotation and scale values used to build a 4x4 matrix, which sometimes needs to be multiplied by the parent bone to get the right position. I think what I should do is first take note of the values that need to be read for each bone, and then I have four options to try:

1. Apply from pos, rot, scale
2. Apply from pos, rot, scale and multiply by parent
3. Apply from 4x4 matrix
4. Apply from 4x4 matrix and multiply by parent

It seems the best way is to try each one of these one at a time and see what happens. If none of them work, then it could be something weird, like needing to multiply the rotation by the inverse of the parent, or needing to start with the parent and apply the transformations onto that. First we'll try the four options listed above, take pictures to see how that works, and then go cry and rethink if it doesn't.
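For options 1 and 2, the int32 rotation values have to become radians first. Ninja-family formats typically store rotations as angle units where 0x10000 is a full turn; assuming unj does the same (a guess), the conversion is:

```javascript
// Convert Ninja-style angle units (0x10000 per full turn -- an
// assumption for unj) to radians, for composing a bone matrix from
// position / rotation / scale.
function angleToRadians(units) {
  return (units / 0x10000) * 2 * Math.PI;
}

// 0x4000 units should come out as a quarter turn.
const quarter = angleToRadians(0x4000);
```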


Yeah, I had the same problems with .dae and .x;  XML is no fun to work with without an extensive library backing you up.  You are correct, GLTF 2.0 appears to be much nicer, and blender imports it nicely.

Re:  Bones

Use the same 4x4 transformation matrix for both the bones and mesh, it makes things easier.  Really, all the bone does is define a rotation center and the local z axis for transformation.

When computing the actual influence each bone has on the points, multiply the transformation by the bone's weight.

So, you take the parent's 4x4 matrix and multiply it by the new transformation matrix, until you run out of things in your hierarchy.

Apply each transformation to the vertex list.  You can probably come up with more efficient ways of doing things than making 16 passes through your vertex list.  Maybe link vertices to bones?

Once all the points are transformed, issue your draw call.  Ideally, you should have 1 draw call per material.

1) Get the vertices for the bone

2) Get its transformation matrix

3) new vertex = ((old vertex - center of rotation) * transformation matrix) + center of rotation.

4) When you've processed every bone, draw the thing.
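The steps above can be sketched directly; here's step 3 with a plain 3x3 row-major rotation matrix (a 4x4 version would just fold the translation in):

```javascript
// new vertex = ((old vertex - center) * rotation) + center,
// with a 3x3 row-major rotation matrix and row-vector convention.
function transformAboutCenter(v, center, m) {
  const x = v[0] - center[0];
  const y = v[1] - center[1];
  const z = v[2] - center[2];
  return [
    x * m[0] + y * m[3] + z * m[6] + center[0],
    x * m[1] + y * m[4] + z * m[7] + center[1],
    x * m[2] + y * m[5] + z * m[8] + center[2],
  ];
}

// 90-degree rotation about the z axis, centered at (1, 0, 0):
// the point (2, 0, 0) swings up to (1, 1, 0).
const rotZ90 = [0, 1, 0, -1, 0, 0, 0, 0, 1];
const out = transformAboutCenter([2, 0, 0], [1, 0, 0], rotZ90);
```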

 


Okay, so option 1 turned out better than expected. Angles are kind of off, but surprisingly close.

2019210172_Screenshot_2020-01-03Agrajag-sama(4).thumb.png.d9d608ed1678449045d5ee0d88bf5421.png

1303693493_Screenshot_2020-01-03Agrajag-sama(5).png.823a110c139d897a918222173a3b723f.png

Using the provided transformation matrix directly has issues:

2118684159_Screenshot_2020-01-03Agrajag-sama(6).png.c5f276b41da61f88bfc38fc355f06922.png

Multiplying by the parent doesn't seem to work in either case. I'm going to try playing around with the compose approach for applying rot, scale and position and see how that goes.

 


Creating the bones from the position, rotation and scale values seems to be the "right" approach. After some tweaking and testing I wasn't able to get the bones to look any nicer, but I was able to get them to look a lot worse. So for now I'll stick with the approach I found for this image:

1541328670_Screenshot_2020-01-03Agrajag-sama(4).thumb.png.9f8b7c5f70556bcfca23175835a03f52.png

Depending on how they weighted the model this could actually be correct, so I'll have to test it when I get into animations to see if the bones are really working properly. For now I threw some random rotations on the bones to see if the mesh deformed along with the skeleton, and it did, so that means weights are being applied.

2098428427_Screenshot_2020-01-03Agrajag-sama(7).png.825c02f11b29d49d665314533fa437df.png

Right now I'm generally happy with the geometry. The faces seem to be working for the mesh, the bones look reasonable, and weights are being applied. The next step is going to be to read the materials, parse the textures, set a material per vertex group, and then lastly try to combine materials to reduce the number of draw calls. I think what I could do for testing is put the textures aside for a moment, assign a random color to each material (like yellow and red), assign a material per vertex group, combine the vertex groups, and then add in the textures last.


Ironically, the way I generated the bones was to add the position data from each node that had vertices in it, because that's how the .nj format stored them. For animation data, there was some math involved in getting the bones to point down the <0,0,1> vector. Something I was going to implement in ExMLDNet was the last bone in each chain; there were empty nodes at the end of each chain that did have position data attached to them (ideally, for characters, this was where a weapon model could be 'attached' to the model). I was going to get the last position and add a bone.

I'll take a look at what you've posted earlier on the bone data (and my own notes on .nj/.njm), and see if I can't think of something of use.

(I've just got Visual Studio reloaded, I'll have to grab my old source code and dig back into it again...)


17 minutes ago, Kryslin said:

Ironically, the way I generated the bones was to add the position data from each node that had vertices in it, because that's how the .nj format stored them. [...]

If you want to brush up on .nj/.njm you can check out the documentation here, the quest editor / animation viewer here, and the engine demo here.


3 minutes ago, Kryslin said:

It looks like someone took apart ExMLDNet for most of it. 🙂  I can see one error which needs to be rectified;

Lightwave's rotation order is YXZ, not ZXY.

Some of it even looks like bits of my  code, too. 🙂

From the Katana SDK documentation:

1352839505_Screenshotfrom2020-01-0402-12-07.thumb.png.f7a0601b7ee5c9e57a753bfe1f3a8575.png


507554135_Screenshot_2020-01-04Agrajag-sama.thumb.png.43cf764e4b83346a16488f945395cccd.png

Went ahead and implemented draw call grouping. Because UNJ assigns vertex weights per group rather than per individual vertex, each draw can only reference up to four bone influences (bad design). So the booma model has 2 materials, but 15 draw calls defined inside the model. To try and account for this and convert it back to something more normal, I wrote a loop that takes all of the draw calls, compares the material id, checks whether the start vertex matches up with the end vertex of the previous call, and if so groups them together. The result is that we now have 2 materials and 2 draw calls.
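The merge loop described above could look roughly like this. The field names (`materialId`, `vertexStart`, `vertexCount`) are made-up stand-ins for whatever the .unj draw call struct actually contains:

```javascript
// Merge adjacent draw calls that share a material and form a
// contiguous vertex range, so 15 calls collapse back toward one
// call per material.
function mergeDrawCalls(calls) {
  const merged = [];
  for (const call of calls) {
    const prev = merged[merged.length - 1];
    if (prev && prev.materialId === call.materialId &&
        prev.vertexStart + prev.vertexCount === call.vertexStart) {
      // Same material and the ranges line up: extend the previous call
      prev.vertexCount += call.vertexCount;
    } else {
      merged.push({ ...call });
    }
  }
  return merged;
}
```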

For the materials I went ahead and assigned random colors for each material to test the group merging aspect. Since the approach should generally be working we can go ahead and start reading and applying the actual color properties for the material, as well as look into rendering the textures to map to the mesh.


Okay, I went back through my old source code (dear lord, what a mess), and figured out how I determined bone positions:

I would walk the hierarchy recursively, then when I hit something with no parent, I would start adding the positions together...

Public Function GetBonePosition(ByVal N As Integer) As Types.XYZCoord
    Dim retv As New Types.XYZCoord
    If N = -1 Then
        ' Reached past the root; start from the origin
        retv = New Types.XYZCoord(0, 0, 0)
    Else
        ' Recurse up the hierarchy and add this bone's local position
        retv = GetBonePosition(Me.bn(N).Parent) + Me.bn(N).Position
    End If
    Return retv.Clone
End Function

Types.XYZCoord is a basic structure:
	float X
	float Y
	float Z

It's part of a class definition I used that also had basic operators defined for working on 3D vectors.
It's functionally the same as Vector3D.

My suggestion of using the entire transformation matrix was incorrect; you'd only want to do this IF your bones are all <0,0,distance> (oriented down the Z+ axis).

I'd have to dig a little further to figure out how I got the bone positions.


Bones aside, I took some time to read the materials and the textures, so we now have a textured booma. There are a lot of properties for both the material and the texture that I'm ignoring at the moment. Fortunately it looks like the booma model doesn't do anything fancy and can be displayed with default settings for the texture and material respectively. We'll see which properties need to be implemented as we test more models.

479758892_Screenshot_2020-01-05Agrajag-sama(1).png.56faaaee52c4a06f7cd8f136a23dc544.png

For bones, I'm not 100% sure that the T-pose I have is accurate, but overall the structure looks reasonable, and checking it against some of my other projects, they all managed to work with scale, rotation, position and add-to-parent. So I think the easiest way to test is going to be to apply an animation, and then use that as a point of reference to figure out how long we need to go cry in the corner. For the Dreamcast, animation files were labeled with the ".njm" file extension. Going with how Phantasy Star Portable adds 'u' to the front of everything, that probably means we're looking for a ".unm". Except we don't know exactly where an animation file is located. For the booma we have two nbl archives: "su_en_booma.nbl" and "su_en_booma_tn.nbl".

And in those files we have:

su_en_booma.nbl:
BoomaTutor.bin
ActDataBooma.unr
AtkDatBooma.unr
DamageDataBooma.unr
MotTblBooma.unr
ParamBooma.unr
SeBooma.unr
TargetSelectBooma.unr
zStaticDataBooma.unr
en_xxx_1mbma.una
en_xxx_1mbma.unj

su_en_booma_tn.nbl:
en_xxx_1mbma.unt
en_1mbm.uvr
 
So I guess we'll need to go looking for an animation file to start analyzing it.

Managed to find an animation file, so we can start working our way through it. Looks like since the game re-uses a lot of rigs, the animations aren't included in the same archive as the model, but in a shared animation archive. Specifically, in the case of the booma, it looks like this archive is "su_en_humanoid_b_share.nbl".

1261136009_Screenshotfrom2020-01-0620-06-34.png.378137fa3ea399f9ac5ee1029df910b9.png

It looks like there are a lot of animations in here, but we should start with what looks the most obvious, so "en_xxx_walk_h_b.unm" looks like it has a high probability of having a normal walk animation. So if we're able to display something that resembles walking, then we're on the right track, otherwise things have gone horribly, horribly wrong and we need to go cry in the corner.

Now that we have a reasonable-looking test file to work with, we can start the analysis. We first dump the hexadecimal contents of the file as an Excel spreadsheet; specifically I use 'hexcel', listed on npm, written by someone who knows what they're doing a lot better than I do. Or maybe it was written by me, nobody really knows. A nice feature of the nbl archive is that because it's designed to be copied straight into memory, the archive provides a list of all of the pointers in the data body that need to be updated. So we can go ahead and list all of those offsets here:

Pointer found at offset: 0xbc4 
Pointer found at offset: 0xbec 
Pointer found at offset: 0xc14 
Pointer found at offset: 0xc3c 
Pointer found at offset: 0xc64 
Pointer found at offset: 0xc8c 
Pointer found at offset: 0xcb4 
Pointer found at offset: 0xcdc 
Pointer found at offset: 0xd04 
Pointer found at offset: 0xd2c 
Pointer found at offset: 0xd54 
Pointer found at offset: 0xd7c 
Pointer found at offset: 0xda4 
Pointer found at offset: 0xdcc 
Pointer found at offset: 0xdf4 
Pointer found at offset: 0xe1c 
Pointer found at offset: 0xe44 
Pointer found at offset: 0xe6c 
Pointer found at offset: 0xe94 
Pointer found at offset: 0xebc 
Pointer found at offset: 0xee4 
Pointer found at offset: 0xef8 
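Since the archive lists pointer locations rather than the pointer values themselves, a small helper can read out what each location points at. This is a hedged sketch; `readPointers` and its output shape are made up for illustration, and it assumes 32-bit little-endian file-relative addresses:

```javascript
// Given the raw data body and the archive's list of pointer offsets,
// read the 32-bit little-endian value stored at each offset. Each
// value should be a file-relative address we can follow during analysis.
function readPointers(arrayBuffer, offsets) {
  const view = new DataView(arrayBuffer);
  return offsets.map((offset) => ({ offset, target: view.getUint32(offset, true) }));
}
```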

 


Starting at the top of the file, we see what we would expect: the magic number followed by the length of the file contents, which is the whole length of the file minus 8 bytes (the magic number and the length field itself). Following that looks like the offset to the file header.

1255441045_Screenshotfrom2020-01-0621-00-40.png.fdf5bde51b11a32c6d38fe63822e3956.png
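The layout described above can be sketched roughly like this. The field layout is my reading of the screenshot and the function name is made up; the length check is the "file size minus 8" observation from above:

```javascript
// Tentative parse of the top of the file: 4-byte magic, uint32 content
// length (file size minus the 8 bytes of magic + length), then what
// looks like a uint32 offset to the file header.
function parseTop(arrayBuffer) {
  const view = new DataView(arrayBuffer);
  const magic = String.fromCharCode(
    view.getUint8(0), view.getUint8(1), view.getUint8(2), view.getUint8(3));
  const contentLength = view.getUint32(4, true);
  if (contentLength !== arrayBuffer.byteLength - 8) {
    throw new Error('length field does not match file size minus 8');
  }
  const headerOffset = view.getUint32(8, true);
  return { magic, contentLength, headerOffset };
}
```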

One thing that confuses me is that this pointer to the header isn't included in the list of pointers. That would normally suggest it's a relative seek from the point where it's read, which could be either 8 or 12 bytes off depending on how you look at it. But looking back at the .unj file, it looks like this was an address relative to the start of the file. So it's probably safe to say the header starts at 0x0ee8 unless we trace through and find problems otherwise.

1678476446_Screenshotfrom2020-01-0621-05-37.png.76e092002cb5bdbd2f4cb40aeba07207.png

Now that we have an address to the header, we can jump down to it and find some values that are quite disturbing. We're going in raw, without much information to go on other than what we know from .njm. If there's something we can't figure out, we may need to open up ppsspp, search for the animation pattern, edit it and do some testing (the horror), but for now we'll press on with what we can grasp from the context of the file. Right away we have the values 0x01 and 0x110, which look like either byte or short values.

We don't know exactly what these values are. We can refer back to .njm, where the types of transformation were encoded in the type field of the file. Position, rotation and scale each had one bit, and most of the time the file was position and rotation, which was encoded in most .njm files as 0x03 (the bits for position and rotation) followed by 0x02 (a check value for two kinds of transform). In this case that could be what 0x01 and 0x110 are, but we have no way of knowing at the moment; there's always the chance we're slightly off from the start of the file.

We can jump ahead a little bit to 0x2042 and 0x0f41. These values are weird in that they're in the higher two bytes, which means these are probably short values, but of what significance we don't really know. The important numbers to look at are 0x15 and 0x0ba0. 0x15 looks like it's probably the number of bones (21), which is slightly weird since the number of bones included in the booma file is 0x16 (22). There is a chance that the root bone is not included, for reasons I'll get into in the next paragraph.

The way PSO encoded animations was that pretty much all animations included position and rotation transformations, but they were applied in a specific way. In most, if not all cases, the root bone only contained position values, which showed how far an animation was intended to move the character for that set of frames. So for animations like the sword attack, the distance the player moved forward and back in the z-direction for each attack was encoded in the animation. The rest of the bones after the root bone generally only ever had rotation animation. Which makes sense: if you think about your own movement, it's mostly caused by your knees or elbows bending (i.e. rotation); it's unlikely that your shoulder pops out of position, and if it does, that's probably pretty painful. So we can make a quick guess that if the number of bones is recorded as less than the total, we're likely looking at rotation-only animation.

As a side note, only one enemy used scale animation in PSO: the Dark Belra, with its respective claw attacks. The design team used scale to exaggerate the strength of the attack. As far as Phantasy Star Portable goes, I'm not sure if this was carried over, so in most cases we should probably expect only rotation, unless we encounter something that suggests otherwise.

What we're looking at next is probably a list of 21 fixed-size structs starting from 0x0ba0, ending right where the header starts. That also lets us double-check whether we were right about the offset. We should also check that the offset 0x0ef8 is included in our list of pointer locations in the file, to make sure we're calibrated in that respect. Okay, it's there; all I had to do was scroll up a bit to be safe.
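As a quick arithmetic sanity check on the guess above (using the 0x0ee8 header offset from the earlier post as an assumption): if 0x15 fixed-size structs run from 0x0ba0 up to the header, the stride has to divide evenly.

```javascript
// If the layout guess is right, (headerStart - structStart) / count
// must be a whole number of bytes per struct.
const structStart = 0x0ba0;
const headerStart = 0x0ee8; // candidate header offset from the earlier post
const count = 0x15;         // 21 entries
const stride = (headerStart - structStart) / count;
if (!Number.isInteger(stride)) throw new Error('layout guess does not divide evenly');
console.log('0x' + stride.toString(16)); // prints "0x28"
```

It does divide evenly here, which at least doesn't rule the guess out; tracing the actual struct contents is still needed to confirm it.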

Edited by kion
