Databases

Created: November 1, 2015 / Updated: December 9, 2019 / Status: in progress / 2 min read (~267 words)

  • Predefined schema (structured)
    • All rows have the exact same format (homogeneity)
  • Data is tightly packed together (locality)
  • Easy to seek to a particular record index, since all rows have the same length (uniformity)
  • Indexing system based either on hashing (unique keys) or B-trees (regular indexes, where duplicates are allowed) to speed up searches (see the sketch after this list)
  • System of foreign keys to ensure referential integrity (relating rows to data in a different structure)
  • Data can be written (insert/update/delete) or read (select)
  • Database normalization principles aim to reduce redundant data, both to prevent desynchronization issues (the same data differing between two tables when it should be identical) and to reduce values to their most atomic form
  • Tables generally represent the entities to be modeled by the system
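
To make these properties concrete, here is a minimal sketch using Python's built-in sqlite3 module; the schema (authors, books) and all names are invented for illustration. It shows a unique key, a regular B-tree index, a foreign key enforcing referential integrity, and a normalized layout where author data lives in exactly one table.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite disables FK checks by default

# Normalized schema: author data lives in one place only, so it cannot
# desynchronize between tables; books reference it via a foreign key.
conn.execute("""
    CREATE TABLE authors (
        id   INTEGER PRIMARY KEY,   -- unique key
        name TEXT NOT NULL UNIQUE   -- unique index, duplicates rejected
    )""")
conn.execute("""
    CREATE TABLE books (
        id        INTEGER PRIMARY KEY,
        title     TEXT NOT NULL,
        author_id INTEGER NOT NULL REFERENCES authors(id)  -- referential integrity
    )""")
# Regular (B-tree) index: duplicates allowed, speeds up lookups by author.
conn.execute("CREATE INDEX idx_books_author ON books(author_id)")

conn.execute("INSERT INTO authors (id, name) VALUES (1, 'Ada Lovelace')")
conn.execute("INSERT INTO books (title, author_id) VALUES ('Notes', 1)")

# Writing a row that points to a non-existent author violates referential
# integrity and is rejected by the database.
try:
    conn.execute("INSERT INTO books (title, author_id) VALUES ('Orphan', 99)")
except sqlite3.IntegrityError as e:
    print("rejected:", e)  # FOREIGN KEY constraint failed

for row in conn.execute("""
        SELECT b.title, a.name FROM books b
        JOIN authors a ON a.id = b.author_id"""):
    print(row)
```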

 well, my understanding of Turing so far is that you can represent pretty much anything as a number
 except those non-computable numbers
 so every word can be represented as a number, a phrase (an ordered sequence of words) as a number, a document as a number, a thought as a number, etc.
 basically everything can be labelled
 then you can "easily" say A <-> B
 in the sense that the entity represented by A is related to the entity represented by B
 although I don't think that gets us very far
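
A toy sketch of the labelling idea from the chat above: give every string a unique integer, after which a relation A <-> B is just a pair of integers. The label function and the example words are invented for illustration.

```python
def label(s: str) -> int:
    # Prefix a 0x01 byte so leading zeros survive, then read the UTF-8
    # bytes as one big base-256 number. This mapping is injective.
    return int.from_bytes(b"\x01" + s.encode("utf-8"), "big")

a = label("databases")
b = label("graphs")
relations = {(a, b)}  # "the entity labelled a is related to the entity labelled b"
print(a, b, (a, b) in relations)
```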

  • Universal data structure framework
  • Universal language for representing all these forms of structure -> using graphs (sketched below)
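
A minimal sketch of a graph as that universal structure, assuming nothing beyond plain Python dicts; the node and edge labels (e.g. "wrote") are invented for illustration.

```python
from collections import defaultdict

# A labelled directed graph: each node maps to a set of (relation, target) edges.
graph: dict[str, set[tuple[str, str]]] = defaultdict(set)

def relate(a: str, rel: str, b: str) -> None:
    graph[a].add((rel, b))

# The same structure can hold table-like rows, documents, or free-form facts.
relate("Ada Lovelace", "wrote", "Notes on the Analytical Engine")
relate("Notes on the Analytical Engine", "about", "Analytical Engine")

for src, edges in graph.items():
    for rel, dst in edges:
        print(f"{src} -[{rel}]-> {dst}")
```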