Doris stream load CSV import fails with [INTERNAL_ERROR]too many filtered rows

Import error message:
[root@pc01 ~]# curl --location-trusted -uroot: -T /tmp/datafiles/test_01.csv -H "format:csv_with_names" -H "trim_double_quotes:true" -H "column_separator:," http://172.16.152.206:8030/api/test/ods_test01/_stream_load
{
    "TxnId": 6203,
    "Label": "a5d05904-26b6-423d-b05f-a140891be62b",
    "Comment": "",
    "TwoPhaseCommit": "false",
    "Status": "Fail",
    "Message": "[INTERNAL_ERROR]too many filtered rows",
    "NumberTotalRows": 3,
    "NumberLoadedRows": 2,
    "NumberFilteredRows": 1,
    "NumberUnselectedRows": 0,
    "LoadBytes": 86,
    "LoadTimeMs": 18,
    "BeginTxnTimeMs": 1,
    "StreamLoadPutTimeMs": 5,
    "ReadDataTimeMs": 0,
    "WriteDataTimeMs": 7,
    "CommitAndPublishTimeMs": 0,
    "ErrorURL": "http://172.16.152.207:8040/api/_load_error_log?file=__shard_8/error_log_insert_stmt_574a7a02c4eaa1b0-e2ada45969919599_574a7a02c4eaa1b0_e2ada45969919599"
}

ErrorURL content:
Reason: actual column number in csv file is more than schema column number.actual number: 4, column separator: [,], line delimiter: [
], schema column number: 2; . src line [1000000601,"科技,大数据,Doris"
];
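A side note on why a single bad row fails the whole job: stream load's max_filter_ratio defaults to 0, so no filtered rows are tolerated at all. As a stopgap only (the bad rows get dropped rather than parsed correctly), the tolerance can be raised with an extra header; a sketch reusing the command above, with 0.5 as an arbitrary example value:

curl --location-trusted -uroot: -T /tmp/datafiles/test_01.csv -H "format:csv_with_names" -H "trim_double_quotes:true" -H "column_separator:," -H "max_filter_ratio:0.5" http://172.16.152.206:8030/api/test/ods_test01/_stream_load

The load should then report Status Success with NumberFilteredRows: 1, but the quoted row's data is still lost.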

1. CREATE TABLE statement:
CREATE TABLE IF NOT EXISTS ods_test01
(
ProductID VARCHAR(100),
Keyword VARCHAR(256)
)
UNIQUE KEY(ProductID)
DISTRIBUTED BY HASH(ProductID) BUCKETS 3
PROPERTIES (
"replication_allocation" = "tag.location.default: 3"
);
2. test_01.csv data to import:
ProductID,Keyword
1000000601,"科技,大数据,Doris"
1000000602,科技
1000000603,

The number of columns in the source file doesn't match the number of columns in your table: the table has two columns, but this csv line is parsed as four, because every comma is treated as a separator, including the ones inside the quotes:

,"科技,大数据,Doris"
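You can reproduce the same split outside Doris by counting fields with awk (just an illustration of a plain split on every comma, with the quotes given no special meaning, not how Doris parses internally):

echo '1000000601,"科技,大数据,Doris"' | awk -F',' '{print NF}'

This prints 4 (the fields being 1000000601, "科技, 大数据 and Doris"), matching the "actual number: 4" in the error log.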

When a field in a csv file contains a comma, it has to be wrapped in double quotes, and in real production data this situation is unavoidable.
trim_double_quotes:true is supposedly meant to handle this quoting, but it doesn't give the desired result. Is there another parameter that can treat quoted content containing commas as one whole column?

So the problem is that column values contain the separator. You can try using some uncommon characters as the row/column separators.
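Two sketches of that idea, both to be verified against your setup. First, assuming the upstream file can be regenerated with a separator that never occurs in the data: column_separator accepts invisible characters in hex notation (e.g. Hive's \x01), so with fields delimited by the \x01 byte the embedded commas no longer split the row:

curl --location-trusted -uroot: -T /tmp/datafiles/test_01.csv -H "format:csv_with_names" -H "column_separator:\x01" http://172.16.152.206:8030/api/test/ods_test01/_stream_load

Second, if your Doris version is recent enough to support the enclose stream load property (added in the 2.x line as far as I know; please check your version's stream load docs), you can declare the double quote as the enclosure character so that separators inside quoted fields are not treated as column breaks:

curl --location-trusted -uroot: -T /tmp/datafiles/test_01.csv -H "format:csv_with_names" -H "column_separator:," -H 'enclose:"' -H "trim_double_quotes:true" http://172.16.152.206:8030/api/test/ods_test01/_stream_load

I've kept trim_double_quotes:true here; whether the enclosure characters are stripped from the value automatically is worth verifying on your version.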