Downloading folders from Google Drive

I wanted to download some course material on RL, shared by the author via Google Drive, using the command line. I got a number of individual files using wget, but a folder in Google Drive was a challenge. I looked it up on Stack Overflow, which gave me a hint but no solution. I installed gdown using pip and then used: gdown --folder --continue. If the folder has more than 50 files you need to add --remaining-ok, and even then you only get the first 50. In that case it is best to download the folder via the UI and decompress it locally. Decompressing from the command line produced Unicode-related errors, but using the Mac UI I decompressed it without a glitch.
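For reference, the commands ended up looking roughly like this. This is a sketch: FOLDER_ID is a placeholder, not the actual shared folder, and the download obviously needs network access.

```shell
# Install gdown, then fetch a shared Google Drive folder from the command line.
pip install gdown

# --folder downloads the whole folder; --continue resumes partial downloads.
gdown --folder --continue "https://drive.google.com/drive/folders/FOLDER_ID"

# Folders with more than 50 files abort unless you pass --remaining-ok,
# and even then only the first 50 files are fetched.
gdown --folder --remaining-ok "https://drive.google.com/drive/folders/FOLDER_ID"
```

For larger folders, downloading the zip from the web UI remains the more reliable route.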

Tips on working with SQL Fiddle

C-3PO vs. Data (137/365)
(Photo credit: JD Hancock)
Trying to get help with SQL? Chances are that you could use the open-source web application SQL Fiddle.
Usually, questions asked on the web leave much to be desired in terms of precision. The good news is that with SQL Fiddle you can set up a copy of the relevant SQL schema, add some sample data, add your best attempt at a query, and share all of these with the people you are chatting or corresponding with.
What follows is an example environment I set up to explore collecting editor session statistics from Wikipedia's undocumented revision_user table. Working with a full environment was not practical, and the fiddle actually ended up being a big time saver, since testing the queries on a live database would have taken hours: !2/d2f02/22

It sometimes seems that setting up a database environment is a bit of overkill - especially when it is just to ask a question. However, if you have a working fiddle, you can play with less data and get your results much faster. Here are a couple of tips for accelerating the process.

Some Tips:

Get the table definition

Using the command below you may be able to get the schema directly:

SHOW CREATE TABLE enwiki_p.revision_userindex;

Describe the table

If you are working with a view, this might not be the best option, since the output can contain lots of constraints. In that case you might want to use:

DESCRIBE revision_userindex;

which will give more relevant information.

Inserting data

The next step would be to put some data into the schema. Getting it from the database is as easy as:

SELECT * FROM revision_userindex LIMIT 1;

You would probably want to custom-make your data set, since this will actually help you intuit the solution to your problem. You probably don't need to specify the field names, so you will end up with a bunch of INSERT statements based on the SELECT, looking something like:

INSERT INTO revision_userindex VALUES (564075732, 3512750, 0, 'comment.', 7328338, 'Su', 20130713010000, 0, 0, 78, 563922235, '6owsp0kafbpvzs1gfivh4227ln2hpzu');

Sharing your fiddles:

In case you are unfamiliar with JSFiddle or GitHub's Gist, you may not realise that you just need to share the URL. Also note that as you change things around, the URL will increment - so make sure you send the right version.

Some Show-stoppers

Setting the correct version

One issue that came up was that my schema was not working because I had not set the correct version of MySQL. I exited and relaunched to get the correct version, updated the fiddle to reflect it, and that cleared up the issue.

Time stamps and dates

Since SQL Fiddle is powered by JavaScript, you may end up with issues related to dates. There is a link hidden in the schema creation dialogue. I changed my TIMESTAMP(14) field to VARBINARY(14) and it began to display properly.
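As a sketch, the schema tweak amounts to the DDL below. The column name rev_timestamp is an assumption based on MediaWiki's revision table; adapt it to whatever your fiddle's schema actually calls the field.

```sql
-- Original column (displayed incorrectly by SQL Fiddle's JavaScript layer):
--   rev_timestamp TIMESTAMP(14) NOT NULL
-- Workaround: store the 14-character timestamp string as raw bytes,
-- which SQL Fiddle renders as-is.
ALTER TABLE revision_userindex
  MODIFY rev_timestamp VARBINARY(14) NOT NULL;
```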

Getting & providing support

So if you are tired of forums, you might want to hang out on IRC, on the Freenode network, with the SQL smarties in #sql.

