My post title went a bit over the top, sorry for that 🙂
Last week I spent my time on Google Cloud. If your data lives in Redshift, you first have to unload it from Redshift to S3, because BigQuery cannot load data directly from Redshift. In other words, S3 acts as a bridge; even if your scripting or OOP skills are limited, you can follow the steps below.
1 – Create a scheduled task that executes an UNLOAD statement like the one below:
unload ('select * from test.bietltools where somefields is not null limit 100')
to 's3://your-bucket/bietltools/' -- replace with your own bucket and prefix
iam_role 'arn:aws:iam::<account-id>:role/<your-role>';
Run that statement in Redshift; if everything goes well, your files will land in S3.
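If you prefer to script this step, here is a minimal Python sketch that builds the UNLOAD statement for you. The bucket path, IAM role, and the `build_unload` helper itself are my own placeholders, not anything from Redshift's tooling, so substitute your real values.

```python
def build_unload(query, s3_path, iam_role):
    """Build a Redshift UNLOAD statement for the given SELECT query.

    UNLOAD wraps the query in single quotes, so any single quote
    inside the query must be doubled.
    """
    escaped = query.replace("'", "''")
    return (
        f"unload ('{escaped}')\n"
        f"to '{s3_path}'\n"
        f"iam_role '{iam_role}'\n"
        "header delimiter ',' gzip;"
    )

# Hypothetical values -- use your own bucket and IAM role.
sql = build_unload(
    "select * from test.bietltools where somefields is not null limit 100",
    "s3://your-bucket/bietltools/",
    "arn:aws:iam::123456789012:role/your-redshift-role",
)
print(sql)
```

You could then run the generated string on a schedule, for example from a cron job that connects to Redshift with a Postgres driver.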
2 – Create a transfer task in Google Cloud Storage
Get your AWS credentials, go to the Cloud Storage interface, and create a transfer job from S3 to GCS, filling in your own credentials, bucket names, and so on.
You can follow the details of the transfer on the Cloud Storage page while your data is copied from S3.
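For reference, the transfer job you click together in the interface corresponds roughly to a JSON body like the one below for the Storage Transfer Service API. The project ID, bucket names, and keys are all placeholders, so treat this as a sketch of the shape rather than an exact spec.

```python
import json

# Hypothetical identifiers -- substitute your own project, buckets, and keys.
transfer_job = {
    "description": "redshift-unload-to-gcs",
    "projectId": "your-gcp-project",
    "status": "ENABLED",
    "transferSpec": {
        "awsS3DataSource": {
            "bucketName": "your-s3-bucket",
            "awsAccessKey": {
                "accessKeyId": "YOUR_ACCESS_KEY_ID",
                "secretAccessKey": "YOUR_SECRET_ACCESS_KEY",
            },
        },
        "gcsDataSink": {"bucketName": "your-gcs-bucket"},
    },
}

print(json.dumps(transfer_job, indent=2))
```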
3 – Create an external table in BigQuery
Create your BigQuery table as an external table in the BigQuery interface; you can define it with an explicit schema or let BigQuery detect one.
If your file has a header row, no worries: you can handle it in the table options in BigQuery.
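If you'd rather script the table definition than click through the interface, BigQuery also accepts DDL. This sketch builds a CREATE EXTERNAL TABLE statement; the dataset, table, and GCS path are placeholders of mine, and `skip_leading_rows` is the option that takes care of the header row.

```python
def external_table_ddl(dataset, table, gcs_uri, skip_header=True):
    """Build BigQuery DDL for a CSV-backed external table."""
    skip = 1 if skip_header else 0
    return (
        f"create external table `{dataset}.{table}`\n"
        "options (\n"
        "  format = 'CSV',\n"
        f"  uris = ['{gcs_uri}'],\n"
        f"  skip_leading_rows = {skip}\n"
        ");"
    )

# Hypothetical names -- use your own dataset, table, and bucket.
ddl = external_table_ddl(
    "your_dataset", "bietltools_ext", "gs://your-gcs-bucket/bietltools/*"
)
print(ddl)
```

Run the resulting statement in the BigQuery query editor and the external table points straight at your files in GCS.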
And now you are ready to query your data, enjoy querying 🙂