This table design can reduce the size of a database that contains unknown/highly dynamic structures and improve its manageability. I will make an assumption (the only one in this post) and say this project deals with loading structures that change frequently. I could be wrong and be looking at a noob's 'work of art'.

There are three approaches to highly dynamic structural data:

1. Dynamically create the schemas based on external blueprints, giving real names to the columns. This might not be the best solution, for a few subtle reasons that have to do with how well the end users know the database and with any planned APIs or DALs. Generating the dynamic schema prior to loading data can be done in two modes (see the first sketch after this list):

   1. Immediate, where a new, un-generated schema is encountered and has to be generated on the spot. This presents an obvious problem if you have multiple clients performing the imports for a single database (similar to SETI): if two clients encounter the same type of schema that isn't in the database yet, only one copy of that schema should be generated.

   2. Delayed, where after loading many imports and discovering that some new files failed to match any current dynamic schema, the database user can manually kick off schema generation and then attempt a reload of the files that failed earlier.

2. Put all the data into a single table (or a table for each datatype) with four columns (ImportFileId, RowId [from the original structure], ParameterName, ParameterValue). This solution gobbles up WAY more disk space, more than doubling the space requirement of the design given above as well as of #1. (In Oracle, at least, the remaining columns that are NULL take up no space.) This is considered serializing the data; it could also be called demuxing. See the second sketch below.

3. Then there is the solution above. The only requirement is that any API must map the ImportFileId to some blueprint of the columns involved. This solution saves much more space than #2 AND has the added benefit of simplicity on the database up-keep side (i.e., no schema generation prior to import). See the third sketch below.

I've been down this road with a database that had over 400,000 actual unique fields of data. We actually used option #1, for good reason, so I wouldn't knock it just because it's different.
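For what it's worth, here is a minimal sketch of how option #1's immediate mode can avoid generating the same schema twice. The table, the column names, and the hash-based claiming scheme are my own illustration, not something from the project:

```sql
-- A registry that serializes dynamic schema creation across concurrent
-- import clients. All names here are hypothetical.
CREATE TABLE schema_registry (
    structure_hash  VARCHAR2(64)  NOT NULL,  -- fingerprint of the incoming structure
    table_name      VARCHAR2(30)  NOT NULL,  -- the dynamically generated table
    created_at      DATE DEFAULT SYSDATE,
    CONSTRAINT schema_registry_uq UNIQUE (structure_hash)
);

-- A client that meets an unknown structure first tries to claim it:
--   INSERT INTO schema_registry (structure_hash, table_name)
--   VALUES (:hash, :generated_name);
-- If the INSERT succeeds, that client runs the generated CREATE TABLE.
-- If it raises ORA-00001 (unique constraint violated), another client got
-- there first, so this one waits for and reuses the existing table.
```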
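And a sketch of option #2, the four-column name/value layout (hypothetical names again):

```sql
-- Every cell of every imported row becomes its own row here, so the
-- ImportFileId/RowId/ParameterName overhead repeats for each value;
-- that repetition is where the extra disk space goes.
CREATE TABLE import_data (
    import_file_id   NUMBER        NOT NULL,  -- which import the value came from
    row_id           NUMBER        NOT NULL,  -- row number in the original structure
    parameter_name   VARCHAR2(64)  NOT NULL,  -- field name in the original structure
    parameter_value  VARCHAR2(4000),          -- value stored as text (or one table per datatype)
    CONSTRAINT import_data_pk
        PRIMARY KEY (import_file_id, row_id, parameter_name)
);
```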
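Finally, a sketch of option #3: one wide table of generic columns plus a blueprint table the API consults to give those columns real names. The column counts and names are illustrative:

```sql
-- Each original row is a single row here; columns beyond what a given
-- structure uses stay NULL, and in Oracle trailing NULL columns
-- consume no storage.
CREATE TABLE import_rows (
    import_file_id  NUMBER NOT NULL,
    row_id          NUMBER NOT NULL,
    col001          VARCHAR2(4000),
    col002          VARCHAR2(4000),
    col003          VARCHAR2(4000),
    -- ...as many generic columns as the widest expected structure
    CONSTRAINT import_rows_pk PRIMARY KEY (import_file_id, row_id)
);

-- The blueprint mapping ImportFileId + generic column -> real name.
CREATE TABLE import_blueprint (
    import_file_id  NUMBER        NOT NULL,
    generic_column  VARCHAR2(30)  NOT NULL,  -- e.g. 'COL001'
    real_name       VARCHAR2(64)  NOT NULL,  -- e.g. 'SENSOR_TEMP'
    CONSTRAINT import_blueprint_pk
        PRIMARY KEY (import_file_id, generic_column)
);
```

A client then reads a given import with something like `SELECT col001 AS sensor_temp, col002 AS sensor_units FROM import_rows WHERE import_file_id = :id`, with the aliases driven by the blueprint rows.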