Downloading folders from Google Drive.

I wanted to download some course material on RL that the author had shared via Google Drive, using the command line. I got a bunch of files with wget, but a folder on Google Drive was a challenge. I looked it up on Stack Overflow, which gave me a hint but no solution. I installed gdown using pip and then ran: gdown --folder --continue https://drive.google.com/drive/folders/1V9jAShWpccLvByv5S1DuOzo6GVvzd4LV. Note that if the folder holds more than 50 files you need to add --remaining-ok, and even then you only get the first 50. In that case it's best to download the folder as a zip via the web UI and decompress it locally. Decompressing from the command line produced Unicode-related errors, but using the Mac UI it decompressed without a glitch.
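
For scripting, gdown also exposes a Python API. A minimal sketch, assuming a gdown version whose download_folder accepts remaining_ok (the output directory name is my own choice):

    import gdown

    url = "https://drive.google.com/drive/folders/1V9jAShWpccLvByv5S1DuOzo6GVvzd4LV"
    # download_folder mirrors the --folder CLI mode; remaining_ok=True lets it
    # proceed when the folder holds more than 50 files (only the first 50 come down).
    gdown.download_folder(url, output="rl-course", remaining_ok=True)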

SQL Dojo

TL;DR:
Imagine the moment just before your DS interview: you are Neo, your coach is Morpheus, and you will practice SQL on rapidly changing schemas.


Now here is a little project I thought up:

Despite the many excellent SQL-based projects I have created, I tend to get rusty in SQL since I don't use it on a regular basis. I decided it would be worthwhile to set up a virtual space to practice, hence the dojo.

The dojo lets a student practice analytical SQL, primarily the kinds of queries analysts use.
Ultimately I'd like to use it in an agile manner as an LMS with a minimal UI. This would require creating a story for each query and a test that checks the query returns a good answer (see the sketch below). To make things interesting, the tasks should be related, proceed from easy to more challenging, and cover a number of techniques like filtering, aggregation, and subqueries.
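
A minimal sketch of such a story + test pair, run against an in-memory SQLite database; the schema, query, and expected rows are all hypothetical:

    import sqlite3

    STORY = "As an analyst, I want the number of orders per customer."
    STUDENT_QUERY = """
        SELECT customer_id, COUNT(*) AS n_orders
        FROM orders
        GROUP BY customer_id
        ORDER BY customer_id
    """
    EXPECTED = [(1, 2), (2, 1)]

    def test_query(conn):
        # The story's test passes if the query returns exactly the expected rows.
        rows = conn.execute(STUDENT_QUERY).fetchall()
        assert rows == EXPECTED, f"{STORY} got {rows}, expected {EXPECTED}"

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (order_id INTEGER, customer_id INTEGER)")
    conn.executemany("INSERT INTO orders VALUES (?, ?)", [(10, 1), (11, 1), (12, 2)])
    test_query(conn)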

Initially, however, I want to have things up and running quickly and to collect questions and answers that reflect how to create views on a small number of databases taken from courses or books. This system can also be used to see how well things work on different DBMSs, with the goal of doing things in a portable fashion.

I thought I might share some specifics. The POC features should be:
  1. Run the server in Docker - easy to install/restart/migrate. (done)
  2. Agile access - e.g. using Visual Studio Code + a plugin. (done)
  3. Rich clients - MySQL Workbench. (done)
  4. SQuirreL SQL - supports more RDBMS systems. (done)
  5. Access from Jupyter. (done, but less agile; see the connection sketch below)
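
For item 5, a minimal connection sketch, assuming the dockerized MySQL listens on localhost:3306; the user, password, and schema name are placeholders:

    import mysql.connector  # pip install mysql-connector-python

    conn = mysql.connector.connect(
        host="127.0.0.1", port=3306,
        user="dojo", password="secret",   # placeholder credentials
        database="classicmodels",         # placeholder practice schema
    )
    cur = conn.cursor()
    cur.execute("SELECT VERSION()")
    print(cur.fetchone())
    conn.close()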
Beyond the POC
  1. Migrate the DB to AWS (more & bigger databases).
  2. Create a web interface to:
    1. switch RDBMS
    2. log in
    3. enter and run queries
    4. show output log
    5. show query output. 
    6. store queries history
    7. keep score 
    8. indicate progress in units.
    9. feedback and discussion.
    10. allow users to add stories and queries.
    11. support non-SQL DBs as well, like Mongo and Neo4j.
  3. Develop small learning units to practice techniques.
    1. [OK] basics
    2. [OK] filtering
    3. [OK] aggregation
    4. [OK] subqueries
    5. [] cleaning data & SQL wrangling
    6. [] OLAP
    7. [] design and DDL
    8. [OK] CRUD + stored procedures (Python).
    9. [] CRUD + stored procedures (R).
    10. [] CRUD + stored procedures (Java).
    11. [] transactions
      1. [] create queries for a BI dashboard.
      2. [] create queries for a marketing automation project.
    12. Migrate queries to the database.
    13. Show the schema for the database.
    14. Make things secure.
    15. Isolate users.
    16. Reset the DB.
    17. Use serverless backends too:
      1. AWS Athena.
      2. Google BigQuery.
    18. Use NoSQL DBs - Mongo, Neo4j.
    19. Connect to a dedicated environment like MySQL Workbench.
    20. Connect to a BI environment like Tableau / Power BI.
    21. Use a freemium hosted database like BigQuery.

First snag:

Accessing MySQL 8.0 and above requires a new authentication plugin: the default changed from mysql_native_password to caching_sha2_password, so older clients can no longer connect with just user + password. I had to either re-enable the old plugin for my user with an ALTER USER statement, or change to the mysql.connector.connect connector, which supports the new plugin.
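
A minimal sketch of both workarounds; host, user, and password are placeholders:

    import mysql.connector

    # Option 1: mysql.connector speaks caching_sha2_password out of the box,
    # so a plain user + password connection works as-is.
    conn = mysql.connector.connect(host="127.0.0.1", user="dojo", password="secret")

    # Option 2: re-enable the legacy plugin so that older clients can also
    # connect (requires a sufficiently privileged account).
    cur = conn.cursor()
    cur.execute("ALTER USER 'dojo'@'%' "
                "IDENTIFIED WITH mysql_native_password BY 'secret'")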

TODO: find this snag and record it.
TODO: add this hack to the MySQL docker image.
TODO: automate the docker image to run a script that creates and loads data from a folder.
TODO: add a docker image for Postgres with equivalent capabilities.
TODO: put the docker images on AWS.
TODO: get a docker image with the MySQL sample database, as it is used in many tutorials.
TODO: migrate the project to Trello.

Updates:
• I installed SQuirreL SQL to access multiple DBs via a rich client.
• I installed GraalVM to do polyglot data science in a notebook.
• I created a Jupyter notebook to access the MySQL database.
  • This is good for accessing a local database.
• I plan to extend this to practicing polyglot data wrangling, i.e. getting data from the DB into R and Python data frames and doing some quick exploration (see the sketch below).
• Spidering & indexing.
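
A sketch of the Python half of that wrangling, using SQLAlchemy + pandas against the same local MySQL; the credentials, schema, and table name are placeholders:

    import pandas as pd
    from sqlalchemy import create_engine

    # mysql+mysqlconnector routes through the mysql.connector driver used above.
    engine = create_engine("mysql+mysqlconnector://dojo:secret@127.0.0.1:3306/classicmodels")

    # Pull a query result into a DataFrame and take a quick look.
    df = pd.read_sql("SELECT * FROM orders LIMIT 100", engine)
    print(df.describe())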

Spinoff DOJOs

ETL
ELK stack
DB
BIG DATA
