MI. The very sound of it spells disaster and the end of the universe.
Well, here's the situation.
Noel wanted me to do a run using November data, solve any issues I find, create a high-level user interface, and deliver the output to Jose for further processing. All this before the 26th of August.
What bugs me:
- All the macros in each model that need the ODBC open command added (see the first sketch after this list).
- All the views and temp tables that are replicating the > 50 character issue into the system.
- The DTS packages that rebuild all the tmp tables from scratch using Jose's tables.
- How to make Planning detect that the data is OK.
- Standardize the names of all logs, macros, and batch files.
- Reducing the time Planning takes to rebuild the model.
- Reducing the time and amount of data Planning needs to pull into the input cubes.
- The entity 14 spreads: bringing those into a DTS package to save time.
- The batch file execution: finding a way to run the files in parallel (see the second sketch after this list).
- Making Planning able to give me some feedback I can attach to a higher-level interface.
- Adding controls to each stage so the process can't continue unless the data from the previous stage is OK (see the last sketch after this list).
- Fixing that error I found in Operations during the last run.
- Standardizing the source for all variables instead of changing them in each model.
- Creating a high-level interface for defining the groups.
- Creating the high-level interface using Planning or Visual Basic, whichever is best.
- The routines to create the output cubes.
- The DTS package for formatting Jose's data.
- Making the MI folder self-contained, even the logs.
- Creating the use cases.
- Creating the SIT document.
- Creating the documentation.
- Getting someone to test it.
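A quick sketch of the ODBC open command the macros need. This assumes the macros are Excel VBA using late-bound ADO; the DSN name "MI_DSN" is made up for illustration.

    ' Open the ODBC source before any query runs.
    ' The DSN name "MI_DSN" is hypothetical.
    Sub OpenOdbcConnection()
        Dim cn As Object
        Set cn = CreateObject("ADODB.Connection")
        cn.Open "DSN=MI_DSN;"
        If cn.State = 1 Then          ' 1 = adStateOpen
            Debug.Print "ODBC connection is open"
        End If
        ' ... run the model's queries here ...
        cn.Close
    End Sub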
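For running the batch files in parallel, something like this could work. A sketch assuming Windows Script Host is available; the file paths are hypothetical.

    ' Launch each batch file without waiting, so the runs overlap.
    Sub RunBatchFilesInParallel()
        Dim sh As Object
        Set sh = CreateObject("WScript.Shell")
        ' Third argument False = do not wait for the command to finish
        sh.Run "cmd /c ""C:\MI\batch\model_a.bat""", 1, False
        sh.Run "cmd /c ""C:\MI\batch\model_b.bat""", 1, False
        sh.Run "cmd /c ""C:\MI\batch\model_c.bat""", 1, False
    End Sub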
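And the stage controls could be as simple as counting what the previous stage left behind and refusing to go on if it's empty. A sketch; the control query and the table name "tmp_stage1" are hypothetical.

    ' Gate between stages: stop if the previous stage produced no rows.
    Function PreviousStageIsOk() As Boolean
        Dim cn As Object, rs As Object
        Set cn = CreateObject("ADODB.Connection")
        cn.Open "DSN=MI_DSN;"
        Set rs = cn.Execute("SELECT COUNT(*) FROM tmp_stage1")
        PreviousStageIsOk = (rs.Fields(0).Value > 0)
        rs.Close
        cn.Close
    End Function

    Sub RunNextStage()
        If Not PreviousStageIsOk() Then
            MsgBox "Previous stage data is missing - stopping here.", vbCritical
            Exit Sub
        End If
        ' ... kick off the next stage here ...
    End Sub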
Well, so far that's what is in my head right now.
The successful outcome for this would be:
An end user running the system from a nifty console that shows him the status of every step in the process and, at the end, opens the pivot table with the results.
The Next action I can take:
Sorting the tasks by how time-consuming they are and doing the quickest ones first.