I've been trying for a few hours to deploy Druid on k8s using Backblaze S3 as deep storage, but I was facing a few errors. I initially thought it was related to not using ZK (thanks Himanshu Gupta for making yet another ZK on the cluster unnecessary), but I switched to ZK temporarily and the behavior continued.

After reading the docs, I set `=true` (assuming, if I'm understanding the phrasing correctly, that this would disable ACLs); the same thing is configured this way in this 2018 post: Deep storage on Oracle Cloud (S3 compat. API) - #2 by Gian_Merlino.

```
druid.s3.endpoint.url=s3.
druid.s3.endpoint.signingRegion=us-west-004
d3Prefix=druid/indexing-logs
```

With that in place I submit this simple job:

```
"type": "index_parallel",
```

The file is correctly stored on the bucket, but it fails to become available/queryable because the Historical cannot download it. (The Coordinator keeps retrying forever, though, and the data is not lost, because Druid is intelligent enough to keep it on the indexer while the Historical is not ready.)

The Historical fails with a NullPointerException:

```
T21:34:09,930 ERROR .loading.SegmentLoaderLocalCacheManager - Failed to load segment in current location, try next location if any:
    at .s3.S3DataSegmentPuller.getSegmentFiles(S3DataSegmentPuller.java:135) ~
    at .s3.S3LoadSpec.loadSegment(S3LoadSpec.java:61) ~
    at .(SegmentLoaderLocalCacheManager.java:304) ~
    at .(SegmentLoaderLocalCacheManager.java:292) ~
    at .(SegmentLoaderLocalCacheManager.java:253) ~
    at .(SegmentLoaderLocalCacheManager.java:225) ~
```

I changed the cluster logging to TRACE, but still no useful information made it into the log. I did notice, though, that there are some requests before the download (I don't think it's the zip segment yet) that returned 200:

```
T21:34:09,929 TRACE - Done parsing service response XML
T21:34:09,929 DEBUG - Received successful response: 200, AWS Request ID: cf9e800baf6a3ebc
T21:34:09,929 DEBUG - x-amzn-RequestId: not available
```
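For context, the endpoint lines quoted above are only a fragment of the S3 configuration. A fuller `runtime.properties` sketch for pointing Druid's S3 extension at an S3-compatible store like Backblaze B2 might look like the following; everything beyond the two `druid.s3.endpoint.*` lines from the post is my assumption (property names as documented for druid-s3-extensions, values are placeholders):

```properties
# Assumed druid-s3-extensions properties; all values are placeholders,
# not taken from the original post.
druid.storage.type=s3
druid.storage.bucket=my-druid-bucket
druid.storage.baseKey=druid/segments
druid.storage.disableAcl=true

druid.s3.accessKey=<keyId>
druid.s3.secretKey=<applicationKey>
druid.s3.endpoint.url=s3.us-west-004.backblazeb2.com
druid.s3.endpoint.signingRegion=us-west-004
# Many S3-compatible stores require path-style access:
druid.s3.enablePathStyleAccess=true
```

If the Historical throws a NullPointerException while the Overlord/MiddleManager uploads fine, a mismatch between the processes' S3 properties (or a missing `enablePathStyleAccess`) is a plausible thing to check first.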
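Since only the `"type": "index_parallel"` line of the job survives here, this is a minimal, hypothetical sketch of such a task spec (data source name, bucket, and schema are made-up placeholders, not values from the post), round-tripped through JSON as a sanity check before POSTing it to the Overlord's task endpoint:

```python
import json

# Hypothetical minimal "index_parallel" task skeleton; all names are placeholders.
spec = {
    "type": "index_parallel",
    "spec": {
        "ioConfig": {
            "type": "index_parallel",
            "inputSource": {
                "type": "s3",
                "uris": ["s3://my-bucket/path/data.json"],  # placeholder
            },
            "inputFormat": {"type": "json"},
        },
        "dataSchema": {
            "dataSource": "example",  # placeholder
            "timestampSpec": {"column": "ts", "format": "iso"},
            "dimensionsSpec": {"dimensions": []},
            "granularitySpec": {"segmentGranularity": "day"},
        },
        "tuningConfig": {"type": "index_parallel"},
    },
}

# Serialize and parse back to confirm the payload is well-formed JSON.
payload = json.dumps(spec)
assert json.loads(payload)["type"] == "index_parallel"
```

This only validates the JSON shape, of course; it says nothing about whether Druid will accept the spec semantically.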