Summary
I would like the from_line option to ignore comment lines when determining the line number.
Motivation
I receive a CSV structured like this:
// Some
// Number of
// Comments
thing1=value1
thing2=value2
header1,header2,header3
value,value,value
...
There is an arbitrary number of comment lines at the top, followed by exactly 2 lines of unrelated data that I need to skip before the actual CSV content begins. The exact starting line can vary.
Alternative
I'm fetching the CSV using got.stream and piping it into parse, then processing rows with for await. I might be able to add some middleware to filter out the unwanted rows before they reach the parser, but I'm not sure of the best way to approach that; a rough sketch is below.
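As a rough sketch of that middleware idea (not an existing csv-parse feature), a small Transform could buffer incoming chunks into lines, drop the // comments plus the first two non-comment lines, and forward the rest to the parser. The skipPreamble name and the URL are made up for illustration; only got, csv-parse and node:stream are assumed, and the line buffering is needed because got.stream emits arbitrary chunks rather than whole lines.

import got from 'got';
import { parse } from 'csv-parse';
import { Transform } from 'node:stream';

// Drops lines starting with commentPrefix, then drops the first
// linesToSkip remaining lines, and passes everything else through.
const skipPreamble = (commentPrefix, linesToSkip) => {
  let buffered = '';
  let skipped = 0;
  return new Transform({
    transform(chunk, _encoding, callback) {
      buffered += chunk.toString();
      const lines = buffered.split('\n');
      buffered = lines.pop(); // keep a trailing partial line for the next chunk
      for (const line of lines) {
        if (line.startsWith(commentPrefix)) continue;           // comment line
        if (skipped < linesToSkip) { skipped += 1; continue; }  // thingN=valueN lines
        this.push(line + '\n');
      }
      callback();
    },
    flush(callback) {
      if (buffered) this.push(buffered);
      callback();
    },
  });
};

const rows = got
  .stream('https://example.com/data.csv')  // hypothetical URL
  .pipe(skipPreamble('//', 2))
  .pipe(parse({ columns: true }));

for await (const row of rows) {
  console.log(row);
}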
Draft
Ideally I would like to be able to use {comment: '//', from_line: 3} to skip over all the comments as well as the 2 unrelated lines, i.e. parsing would start from the 3rd non-comment line.
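For illustration only, a minimal sketch of how the proposed option combination would be used, assuming from_line counted only non-comment lines (which is the requested change, not the current behavior):

import { parse } from 'csv-parse';

// Proposed behavior: the // comments are ignored entirely, the two
// thingN=valueN lines count as lines 1 and 2, and parsing starts at
// line 3, i.e. the header1,header2,header3 row.
const parser = parse({
  comment: '//',
  from_line: 3,
  columns: true,
});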