This article discusses the challenges of using Kafka's tiered storage feature, which became production-ready in version 3.9.0 and enables longer data retention at lower cost. It highlights two key issues when consuming data from remote storage: sequential remote fetches that increase end-to-end latency, and remote fetch responses that can exceed the configured fetch size limits. Practical workarounds and improvements expected in Kafka 4.2.0 are suggested to mitigate these problems, making it important for developers leveraging tiered storage to understand these implications.
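
As context for the fetch limits mentioned above, the sketch below shows the standard consumer settings that bound fetch response sizes (`fetch.max.bytes` and `max.partition.fetch.bytes`). This is only an illustration of the configuration surface involved, not the article's specific workaround; the bootstrap address, group id, and topic name are hypothetical placeholders.

```java
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class TieredStorageConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "tiered-storage-reader");   // hypothetical group id
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");

        // Upper bound on data returned per fetch request (default 50 MiB).
        // The article notes that fetches served from remote storage can exceed such limits,
        // so this value is best treated as a target rather than a hard cap when sizing consumer memory.
        props.put(ConsumerConfig.FETCH_MAX_BYTES_CONFIG, 52428800);

        // Per-partition cap on returned data (default 1 MiB); the same caveat applies.
        props.put(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG, 1048576);

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("tiered-topic")); // hypothetical topic name
            // poll loop omitted for brevity
        }
    }
}
```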