About kion


  1. Grabbing more files from the common models archive. Time to start looking into other archives for test cases. Two notes: nmll-09283.nbl is broken, and nmll-08904.nbl might be a good test candidate for the next set of vertex flag testing.
  2. Oh look, a different model. Now that we're able to read archives and textures, the next step is to work through different vertex lists to figure out which flags do what. The game seems to group each vertex's attributes in the order of diffuse map uv, color, and position, and it uses both 32-bit floats and 16-bit ints to define values. So the flag defines which attributes are in the vertex list and what format they are in. Basically: find a model, debug the vertex list, find another model, and debug again. To start out we have the Calorie Mate item, which I figured would be a good place to start with the 16-bit values: we have 16-bit uv values and 16 vertices. I don't think I have the exact scale for the vertices and uv values yet, but I'm surprised how close I got by plugging in likely values.
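For reference, a minimal sketch of the flag-driven vertex read described above. The flag bit positions and the fixed-point scale are placeholder guesses for illustration, not the confirmed format:

```javascript
// Hypothetical vertex-flag reader: the bit layout below is an assumption
// for illustration while the real flags are still being reverse-engineered.
const FLAG_UV = 0x01;    // diffuse map uv present
const FLAG_COLOR = 0x02; // vertex color present
const FLAG_SHORT = 0x80; // values stored as 16-bit ints instead of floats

function readVertex(view, offset, flags) {
  const vertex = {};
  const readVal = () => {
    if (flags & FLAG_SHORT) {
      const v = view.getInt16(offset, true) / 0x8000; // guessed fixed-point scale
      offset += 2;
      return v;
    }
    const v = view.getFloat32(offset, true);
    offset += 4;
    return v;
  };
  // Attribute order observed in the game: uv, color, position
  if (flags & FLAG_UV) vertex.uv = [readVal(), readVal()];
  if (flags & FLAG_COLOR) {
    vertex.color = view.getUint32(offset, true);
    offset += 4;
  }
  vertex.position = [readVal(), readVal(), readVal()];
  return { vertex, offset };
}
```

The returned offset lets the caller walk a whole vertex list by feeding it back into the next call.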
  3. We're back to something that resembles the first post, with a few improvements. The main ones are that we're no longer unpacking and then storing files; we can take archives and read the files directly from them. Textures have also been implemented, and I fixed the vertex winding order for the faces. Next step is to test the meshes that are included in the basic archive and then start scaling up from there.
  4. Small update. Textures have been added. Right now I'm drawing the textures off to the side (outlined in red) to check that each texture has been parsed correctly. I should probably add another window to the UI for viewing the textures that have been loaded for a model. To save time, I might make a white box, stick it under the 3d viewport, and then move on to applying the textures to the diffuse map uvs.
  5. Okay, so now that we can import assets and store them to be referenced later, the next step is to start parsing the files. The way I have it now, unj files are split into categories and then listed on the left. When a unj file is clicked, we can load it and parse it. I did some testing with nodejs on converting textures to pngs, and the texture format looks pretty straightforward. So before we get into the models, we can take a second to make sure the textures are working, which should help later with identifying which assets belong to which models, and that helps with debugging. To know which texture needs to be read for which model, we need to reference the texture list, which tells us which textures get linked with which materials. Thankfully they re-used the NJTL ninja texture list format from the Dreamcast days, possibly with a slight modification, but looking at the file it's pretty easy to trace through. First the magic number NUTL, followed by the length, followed by the pointer to the header, which is 0x24 in this case. The header is the number of textures, followed by a pointer to the first texture struct, which is at 0x10. 0x10 and 0x14 are both zero, because these values are used for attributes after the texture has been loaded into RAM, but not before. That just leaves a pointer to the start of the texture name, which is 0x2c. This is only for a .unt file with one texture, so to get the exact format of the header we'll have to find a texture list that references more than one texture, but for now we have something that works. We can get the name of the texture, load it, parse it, and have it ready to be paired with the model geometry when that gets loaded.
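As a sketch, tracing the pointer chain above for a single-texture .unt file might look like this. Assumptions: the stored pointers are absolute file offsets (matching the numbers in the post), the name pointer is the third dword of the texture struct after the two runtime attribute fields, and the 0x0c struct stride for multi-texture lists is a guess:

```javascript
// Sketch of reading a NUTL texture list, following the pointer chain
// described above. Pointer relativity and struct stride are assumptions.
function parseNutl(buffer) {
  const view = new DataView(buffer);
  const magic = [0, 1, 2, 3]
    .map(i => String.fromCharCode(view.getUint8(i)))
    .join('');
  if (magic !== 'NUTL') throw new Error('not a NUTL texture list');
  const length = view.getUint32(0x04, true);
  const headerOfs = view.getUint32(0x08, true);          // 0x24 in the sample
  const count = view.getUint32(headerOfs, true);         // number of textures
  const structOfs = view.getUint32(headerOfs + 4, true); // 0x10 in the sample
  const names = [];
  for (let i = 0; i < count; i++) {
    // two runtime attribute dwords (zero on disk), then the name pointer
    const nameOfs = view.getUint32(structOfs + i * 0x0c + 8, true); // 0x2c
    let name = '';
    for (let p = nameOfs; view.getUint8(p) !== 0; p++) {
      name += String.fromCharCode(view.getUint8(p));
    }
    names.push(name);
  }
  return { length, names };
}
```

Once a multi-texture list turns up, the stride and pointer-base guesses can be checked against real data.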
  6. Okay, so managing the file.fpb as an input might be an over-reach, but it looks like I can now access nbl files for uploading. That means I can either figure out a better way to implement fpb without crashing, or take a list of nbl files as input, which would require running a script to pre-separate them but would otherwise work. For the viewer side of things, to keep the list of models from being too long, I split the models into groups by the first two letters of the model name. So far that seems to work for the limited test cases I've been working with; we'll have to see if the approach holds up. Right now the model parsing side is still pretty limited. I guess I'll start with the simple drop models and work from there. Though before working on models I think I'll go ahead and implement textures to get those out of the way first, because from what I've seen so far, the texture implementation is surprisingly straightforward.
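The grouping step is simple enough to sketch (a hypothetical helper matching the two-letter prefix idea above):

```javascript
// Group model filenames by their first two letters to keep the
// viewer's sidebar list manageable.
function groupByPrefix(filenames) {
  const groups = {};
  for (const name of filenames) {
    const key = name.slice(0, 2).toLowerCase();
    (groups[key] = groups[key] || []).push(name);
  }
  return groups;
}
```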
  7. Okay, so now files are unpacked from the nbl archive, and the question is what the best way to store and sort them is. I added a few properties to make it easier to find things, like extension and basename, added a shared unique id for files in an nbl archive, and added an md5. What I think I will do is set filename + md5 as the primary key. Then to find the models I can search for .unj on the extension. And since I'm only interested in models, what I might do is filter out files from the nmll section that don't match .unj, .unt, .unv, or .unm.
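A quick sketch of the filter and primary-key idea; the `filename:md5` key encoding is just one possible choice, not a fixed format:

```javascript
// Keep only the model-related extensions from the nmll section and
// build a filename + md5 primary key for each surviving entry.
const KEEP = ['.unj', '.unt', '.unv', '.unm'];

function indexModelFiles(files) {
  // files: [{ filename, md5, ... }]
  return files
    .filter(f => KEEP.some(ext => f.filename.toLowerCase().endsWith(ext)))
    .map(f => ({ key: `${f.filename}:${f.md5}`, ...f }));
}
```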
  8. Small update: got Blowfish ported to vanilla JavaScript. It might not be an exciting picture to see some filenames in a console, but it's pretty exciting to see this working in vanilla JavaScript, as there are a few quirks to working with these files in the browser. Since the browser can't access the file system directly, you need to have the user select the file and then read it with the FileReader API to get the raw data as an ArrayBuffer. Blowfish specifically relies on the overflow behavior of the uint32_t type, which doesn't work the same as JavaScript's built-in numbers. To get around this you can declare a Uint32Array, which causes members of the array to act like native 32-bit unsigned integers. And last, the DataView API allows for reading and writing different datatypes to an ArrayBuffer. The result is being able to parse files directly in the browser. Right now I just have the header; the next step will be to decompress and save the files. Edit: PRS is working too. Time to store the files to be used.
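A tiny demonstration of the Uint32Array trick mentioned above:

```javascript
// Plain JS numbers are 64-bit floats, so 32-bit addition doesn't wrap:
const plain = 0xffffffff + 1; // 4294967296, no overflow

// Elements of a Uint32Array are stored modulo 2^32, like a C uint32_t,
// which is the behavior the Blowfish round function depends on:
const u32 = new Uint32Array(1);
u32[0] = 0xffffffff;
u32[0] += 1; // wraps around to 0
```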
  9. Okay, so some random thoughts/updates. I tried to take the file.fpb file as an argument in the page, and the browser made it part of the way through and then promptly crashed. I don't think JavaScript is suited for loading an 829MB file, searching through it for offsets, and then creating a new buffer for each offset. I might need to try something like asm.js so I can be more strict about how memory is handled (potentially)? Or I might need a script that pre-processes file.fpb to get it down to more manageable pieces. For the nbl archives, I have a bunch of questions about how to handle them. There are two kinds of nbl archives: one with 0x10 at offset 0x05 and one with 0x00 at offset 0x05. We're able to unpack the nbl archives with 0x00 at offset 0x05. I'm hoping that maybe Agrajag or JamRules can provide more information on this format. We can at least read the filenames from the NMLL header for both of these types, so right now I can write a function that takes an nbl archive as input, reads the file header, checks whether the archive contains a .unj model file, and discards the nbl file if none are found; otherwise we attempt to decrypt and decompress the NMLL and TMLL body and store the archive. The next thing I'm thinking about is grouping, and how reliable the filenames are going to be. At first I was hoping that the developers created unique names for each asset, and that when names were re-used it was because the same asset was being cloned for a different scene. After seeing the number of duplicated filenames, though, it looks like they probably used the same name for different assets, but I'll have to do an md5 check to see if these are the same files or not. If the md5 sums match up, I can go ahead and extract everything out to the raw files; otherwise it might indicate that I'll have to group by nbl archive.
I think what I should be focusing on right now is continuing to work with nbl archive 8894, which contains the drops. I can set up my function to take the nbl file as an argument, extract and display the filenames, and then parse the textures and display the model on click.
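The screening function described above could be sketched like this, with `readNmllFilenames` standing in as a hypothetical placeholder for the real NMLL header parser:

```javascript
// Screen an nbl archive: read the NMLL filename table and keep the
// archive only if it contains at least one .unj model file.
// readNmllFilenames is a placeholder for the actual header parser.
function screenArchive(arrayBuffer, readNmllFilenames) {
  const filenames = readNmllFilenames(arrayBuffer);
  const hasModel = filenames.some(name => name.toLowerCase().endsWith('.unj'));
  return hasModel ? { keep: true, filenames } : { keep: false };
}
```

Archives that pass the check would then go on to the decrypt/decompress step; the rest get discarded early.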
  10. This worked. I don't know how I managed to mess this up so badly before. Right now the game runs in full screen at 1024x768. Any follow-ups on making the game run in windowed mode, or applying an offline patch?
  11. Getting started with parsing models from psp2i. Right now I'm dealing with individual models, so the next step will be to take the file.fpb file as an input and locate the NMLL archives.