Releases: thatInfrastructureGuy/sqltocsvgzip

Bug Fix: Nil pointer exception when `WriteHeaders` is set to false

01 Feb 06:37
If WriteHeaders is set to false, don't write headers to the CSV buffer (see the sketch below).

Signed-off-by: thatInfrastructureGuy <[email protected]>
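
A minimal sketch of the fixed path. WriteConfig and the WriteHeaders field appear in the releases below; the WriteFile method signature shown here is an assumption.

```go
package main

import (
	"database/sql"
	"log"

	"github.com/thatInfrastructureGuy/sqltocsvgzip"
)

func writeWithoutHeaders(rows *sql.Rows) {
	c := sqltocsvgzip.WriteConfig(rows)
	// Previously this setting caused a nil pointer dereference;
	// now the header row is simply skipped.
	c.WriteHeaders = false
	if err := c.WriteFile("output.csv.gz"); err != nil { // assumed method signature
		log.Fatal(err)
	}
}
```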

Return Row Count.

19 Jan 17:11
  • Return the number of rows processed (see the usage sketch below).
  • Read the log level from the LOG_LEVEL environment variable; defaults to "INFO".
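
A usage sketch covering both changes. The two-value return shape is an assumption based on this release note; LOG_LEVEL is read from the environment.

```go
package main

import (
	"database/sql"
	"log"
	"os"

	"github.com/thatInfrastructureGuy/sqltocsvgzip"
)

func uploadAndCount(rows *sql.Rows) {
	// LOG_LEVEL controls the library's verbosity and defaults to "INFO".
	os.Setenv("LOG_LEVEL", "DEBUG")

	// Assumed return shape: the processed row count alongside the error.
	rowCount, err := sqltocsvgzip.UploadToS3(rows)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("processed %d rows", rowCount)
}
```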

Upgrade Dependencies

17 Jan 23:01
  • Upgrade pgzip and aws-sdk-go

Use 4 Upload Threads by Default

17 Jan 22:02
7df1e5e
  • Updated documentation.
  • Use a static number of upload threads (4) for consistent memory usage (see the sketch below).
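
A sketch of pinning the thread count yourself. The release only states that the default is a static 4 threads; the UploadThreads field name here is hypothetical.

```go
package main

import (
	"database/sql"
	"log"

	"github.com/thatInfrastructureGuy/sqltocsvgzip"
)

func uploadWithFixedThreads(rows *sql.Rows) {
	c := sqltocsvgzip.UploadConfig(rows)
	// Hypothetical field name. A static number of concurrent part uploads
	// keeps peak memory flat at roughly threads × part-buffer size.
	c.UploadThreads = 4
	if err := c.Upload(); err != nil {
		log.Fatal(err)
	}
}
```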

Remove SQL Batch Buffer

16 Jan 18:48
6f0c0d3
  • Remove the SQL batch buffer.
  • Preset the initial capacity of csvBuffer. See #8 and the sketch below.
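
The technique in general form, since csvBuffer is internal to the package: presetting a buffer's capacity avoids repeated grow-and-copy cycles as rows accumulate. The 4 MB figure here is illustrative, not the library's actual preset.

```go
package main

import (
	"bytes"
	"fmt"
)

func main() {
	var buf bytes.Buffer
	// Preset the starting capacity up front so appends don't repeatedly
	// reallocate and copy the underlying slice.
	buf.Grow(4 * 1024 * 1024)

	for i := 0; i < 1000; i++ {
		fmt.Fprintf(&buf, "row-%d,value\n", i)
	}
	fmt.Printf("%d bytes buffered, capacity %d\n", buf.Len(), buf.Cap())
}
```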

Memory Optimization

16 Jan 18:14
5ce4eab

Reduce fmt.Sprintf calls wherever possible (#7); the pattern is sketched below.
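
The pattern behind this optimization, in general form (the actual call sites are in #7): each fmt.Sprintf allocates an intermediate string per row, which direct buffer writes avoid.

```go
package main

import (
	"bytes"
	"fmt"
	"strconv"
)

func main() {
	var buf bytes.Buffer
	id, name := 42, "alice"

	// Before: allocates a throwaway string for every row.
	row := fmt.Sprintf("%d,%s\n", id, name)
	buf.WriteString(row)

	// After: write each field straight into the buffer.
	buf.WriteString(strconv.Itoa(id))
	buf.WriteByte(',')
	buf.WriteString(name)
	buf.WriteByte('\n')

	fmt.Print(buf.String())
}
```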

Abort S3 upload when SIGTERM/SIGINT is received.

22 Jul 05:52
e3baa04
  • Cleans up unfinished multipart uploads (see the sketch below).
  • Incomplete multipart uploads still incur S3 storage costs, so the uploads are aborted explicitly when the program hits an error or receives a signal.
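
The general shape of the cleanup, sketched with the standard os/signal package; the actual AbortMultipartUpload call happens inside the library.

```go
package main

import (
	"context"
	"log"
	"os"
	"os/signal"
	"syscall"
)

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGINT, syscall.SIGTERM)
	go func() {
		s := <-sigs
		log.Printf("received %v, aborting in-flight S3 upload", s)
		// Cancelling lets the uploader abort the multipart upload so the
		// orphaned parts stop accruing S3 storage charges.
		cancel()
	}()

	runUpload(ctx) // placeholder for the library-driven upload
}

func runUpload(ctx context.Context) {
	<-ctx.Done() // stand-in body so the sketch compiles
}
```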

Introduced UploadConfig() and WriteConfig()

18 Jul 23:40
696bc26
  • UploadConfig() replaces DefaultConfig().
  • WriteConfig() replaces New(). (A migration sketch follows.)
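
A migration sketch. The constructor names come from this release; passing rows mirrors the old New(rows)/DefaultConfig(rows) convention and is an assumption.

```go
package main

import (
	"database/sql"

	"github.com/thatInfrastructureGuy/sqltocsvgzip"
)

func migrate(rows *sql.Rows) {
	// Before: c := sqltocsvgzip.DefaultConfig(rows)
	c := sqltocsvgzip.UploadConfig(rows) // converter preconfigured for S3 upload

	// Before: w := sqltocsvgzip.New(rows)
	w := sqltocsvgzip.WriteConfig(rows) // converter for writing to a file or io.Writer

	_, _ = c, w
}
```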

Major bug fix: Corrupted Gzip on S3 upload

18 Jul 21:42
c6ae3a7
  • Fixed a bug where the S3 upload would produce a corrupted gzip stream.
  • Reduced memory footprint.
  • Disabled some debug logs.
  • Adjusted defaults.
  • Added documentation and comments.

Upload directly from SQL to S3

16 Jul 01:06
e0bfc75

Features

  • Consistent memory, CPU, and network usage whether your database has 1 million or 1 trillion records.
  • Multipart upload when the file size exceeds 5 MB.
  • Buffer size is set to 5 MB by default.
  • No writable disk required, since output is uploaded directly to S3.
  • Log levels: Error, Warn, Info, Debug, Verbose.
  • Various bug fixes.
  • Breaking API changes, except for the WriteFile(file, rows) and Write(w io.Writer) methods.
  • New methods: UploadToS3(rows) and Upload() to upload to S3 (see the end-to-end sketch after this list).
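
An end-to-end sketch of the new one-shot helper. UploadToS3(rows) is named in this release; the driver, connection string, and query are placeholders.

```go
package main

import (
	"database/sql"
	"log"
	"os"

	_ "github.com/lib/pq" // hypothetical driver choice
	"github.com/thatInfrastructureGuy/sqltocsvgzip"
)

func main() {
	db, err := sql.Open("postgres", os.Getenv("DATABASE_URL"))
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	rows, err := db.Query("SELECT * FROM orders")
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	// Streams rows to CSV, gzips in memory, and multipart-uploads to S3
	// without touching local disk.
	if err := sqltocsvgzip.UploadToS3(rows); err != nil {
		log.Fatal(err)
	}
}
```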

Caveats

  • AWS allows a maximum of 10,000 parts per multipart upload, so the default settings support up to 50 GB (5 MB × 10,000) of gzipped data.
  • Increase the buffer size if you want fewer parts or have more than 50 GB of gzipped data (see the sketch below).
  • Currently only supports uploads to AWS S3 API-compatible storage.
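
A sketch of the workaround. The caveat only says to increase the buffer size; the UploadPartSize field name here is hypothetical.

```go
package main

import (
	"database/sql"
	"log"

	"github.com/thatInfrastructureGuy/sqltocsvgzip"
)

func uploadLargeDataset(rows *sql.Rows) {
	c := sqltocsvgzip.UploadConfig(rows)
	// Hypothetical field name: raise the per-part buffer from the default 5 MB.
	// At AWS's 10,000-part cap, 50 MB parts allow roughly 500 GB of gzipped data.
	c.UploadPartSize = 50 * 1024 * 1024
	if err := c.Upload(); err != nil {
		log.Fatal(err)
	}
}
```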