I don't have my notes with me right now, so this is just from memory.
Lexical analysis is the process that checks that all reserved words, identifiers, constants and operators in the source code are legitimate members of the language's lexicon.
Process:
1. Redundant characters are removed (e.g. remarks, indentation, extra spaces).
2. Each element is identified and assigned a token. If an element cannot be identified, the translator generates an error message (see the sketch below).
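To make the two steps concrete, here's a minimal sketch in Python. This isn't from either book; the toy lexicon, the remark syntax (a single quote) and the names RESERVED_WORDS, OPERATORS and tokenize are all my own assumptions:

```python
import re

# Hypothetical toy lexicon -- the real sets depend on the language being translated.
RESERVED_WORDS = {"IF", "THEN", "ELSE", "WHILE", "ENDWHILE", "PRINT"}
OPERATORS = {"+", "-", "*", "/", "=", "<", ">"}

def tokenize(source):
    """Step 1: strip remarks and redundant whitespace; step 2: one token per element."""
    # Remove remarks (assumed here to run from a single quote to the end of the line)
    source = re.sub(r"'.*", "", source)
    tokens = []
    for element in source.split():  # splitting discards indentation and spaces
        if element in RESERVED_WORDS:
            tokens.append(("RESERVED", element))
        elif element in OPERATORS:
            tokens.append(("OPERATOR", element))
        elif element.isdigit():
            tokens.append(("CONSTANT", element))
        elif element.isidentifier():
            tokens.append(("IDENTIFIER", element))
        else:
            # The element matches no token class: lexical error
            raise SyntaxError(f"Lexical error: cannot identify element {element!r}")
    return tokens

print(tokenize("IF count < 10 THEN PRINT count  ' show the counter"))
```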
Whilst I don't think it is necessary to know much about the token table (it's not even mentioned in the Excel book), I'll try to answer your question using the Sam Davis book. According to him, the table initially contains all the reserved words and operators that are built into the language. As lexical analysis proceeds, identifiers and constants are added to the token table.
So from his explanation, it seems that at the end of lexical analysis the table should contain:
1. All the legitimate elements of the programming language, such as reserved words and operators (present from the start).
2. The constants and identifiers that the programmer created while coding (added during the scan), as in the sketch below.
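Again, this is just a sketch of my understanding and not the actual mechanism from the Davis book. The idea is that the table starts pre-loaded with the language's own elements, and programmer-created identifiers and constants get appended as the scanner meets them:

```python
# Hypothetical token table, pre-loaded with the language's built-in elements.
# A real translator would store more (token codes, symbol attributes, etc.).
token_table = {
    "IF": "RESERVED", "THEN": "RESERVED", "PRINT": "RESERVED",
    "+": "OPERATOR", "<": "OPERATOR", "=": "OPERATOR",
}

def record(element, kind):
    """Add a programmer-created identifier or constant as lexical analysis proceeds."""
    if element not in token_table:
        token_table[element] = kind

record("count", "IDENTIFIER")  # created by the programmer
record("10", "CONSTANT")

# After the scan, the table holds both the built-in elements and the new ones.
print(token_table)
```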
That is my overall understanding of the lexical analysis process, but I don't think you need to learn it in that much detail.