
Core: Interface based DataFile reader and writer API#12298

Open
pvary wants to merge 6 commits into apache:main from pvary:file_Format_api_without_base

Conversation

@pvary
Contributor

@pvary pvary commented Feb 17, 2025

Here is what the PR does:

  • Created 3 interface classes which are implemented by the file formats:
    • ReadBuilder - Builder for reading data from data files
    • AppenderBuilder - Builder for writing data to data files
    • ObjectModel - Provides ReadBuilders and AppenderBuilders for a specific data file format and object model pair
  • Updated the Parquet, Avro, and ORC implementations for these interfaces, and deprecated the old reader/writer APIs
  • Created interface classes which will be used by the actual readers/writers of the data files:
    • AppenderBuilder - Builder for writing a file
    • DataWriterBuilder - Builder for generating a data file
    • PositionDeleteWriterBuilder - Builder for generating a position delete file
    • EqualityDeleteWriterBuilder - Builder for generating an equality delete file
    • No ReadBuilder here - the file format reader builder is reused
  • Created a WriterBuilder class which implements the interfaces above (AppenderBuilder/DataWriterBuilder/PositionDeleteWriterBuilder/EqualityDeleteWriterBuilder) based on a provided file format specific AppenderBuilder
  • Created an ObjectModelRegistry which stores the available ObjectModels and from which engines and users can request the readers (ReadBuilder) and writers (AppenderBuilder/DataWriterBuilder/PositionDeleteWriterBuilder/EqualityDeleteWriterBuilder); a usage sketch follows this list
  • Created the appropriate ObjectModels:
    • GenericObjectModels - for reading and writing Iceberg Records
    • SparkObjectModels - for reading (vectorized and non-vectorized) and writing Spark InternalRow/ColumnarBatch objects
    • FlinkObjectModels - for reading and writing Flink RowData objects
    • An Arrow object model is also registered for vectorized reads of Parquet files into Arrow ColumnarBatch objects
  • Updated the production code where the reading and writing happens to use the ObjectModelRegistry and the new reader/writer interfaces to access data files
  • Kept the testing code intact to ensure that the new API/code does not break anything
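
As a rough sketch of the intended usage (the readBuilder method, the "spark" model name, and the builder calls below are illustrative assumptions, not the exact API of this PR), an engine-side read keyed by file format and object model could look like this:

  // Illustrative sketch: ask the registry for the ReadBuilder registered for the
  // (file format, object model) pair, configure it, and build the reader.
  // ObjectModelRegistry is the registry introduced by this PR; its package and the
  // exact method signatures are assumed here.
  import org.apache.iceberg.FileFormat;
  import org.apache.iceberg.Schema;
  import org.apache.iceberg.io.CloseableIterable;
  import org.apache.iceberg.io.InputFile;
  import org.apache.spark.sql.catalyst.InternalRow;

  class RegistryReadSketch {
    CloseableIterable<InternalRow> read(InputFile file, Schema projection) {
      return ObjectModelRegistry.<InternalRow>readBuilder(FileFormat.PARQUET, "spark", file)
          .project(projection)
          .build();
    }
  }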

@pvary pvary force-pushed the file_Format_api_without_base branch 2 times, most recently from c528a52 to 9975b4f Compare February 20, 2025 09:45
@pvary pvary changed the title WIP: Interface based FileFormat API WIP: Interface based DataFile reader and writer API Feb 20, 2025
Contributor

@liurenjie1024 liurenjie1024 left a comment

Thanks @pvary for this proposal, I left some comments.

@pvary
Contributor Author

pvary commented Feb 21, 2025

I will start to collect the differences here between the different writer types (appender/dataWriter/equalityDeleteWriter/positionalDeleteWriter) for reference:

  • The writer context is different for delete and data files. It contains TableProperties/Configurations which can be set separately for delete and data files, for example for Parquet: RowGroupSize/PageSize/PageRowLimit/DictSize/Compression etc. ORC and Avro have similar configs (see the sketch after this list)
  • Specific writer functions for position deletes to write out the PositionDelete records
  • A positional delete PathTransformFunction to convert the writer data type of the path column to the file format data type
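
For example, the Parquet row-group size is already controlled by separate table properties for data files and delete files; a minimal sketch, assuming the existing TableProperties constants (the helper class itself is hypothetical):

  // The property key consulted depends on whether a data file or a delete file is
  // being written, so a delete writer cannot simply reuse the data writer's config.
  // PARQUET_ROW_GROUP_SIZE_BYTES and DELETE_PARQUET_ROW_GROUP_SIZE_BYTES are existing
  // Iceberg properties; this helper is only an illustration.
  import org.apache.iceberg.FileContent;
  import org.apache.iceberg.TableProperties;

  class ParquetWriteConfigSketch {
    static String rowGroupSizeKey(FileContent content) {
      return content == FileContent.DATA
          ? TableProperties.PARQUET_ROW_GROUP_SIZE_BYTES         // write.parquet.row-group-size-bytes
          : TableProperties.DELETE_PARQUET_ROW_GROUP_SIZE_BYTES; // write.delete.parquet.row-group-size-bytes
    }
  }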

@rdblue
Contributor

rdblue commented Feb 22, 2025

While I think the goal here is a good one, the implementation looks too complex to be workable in its current form.

The primary issue that we currently have is adapting object models (like Iceberg's internal StructLike, Spark's InternalRow, or Flink's RowData) to file formats, so that you can write the object-model-to-format glue code separately and have it work throughout an engine's support. I think a diff from the InternalData PR demonstrates it pretty well:

-    switch (format) {
-      case AVRO:
-        AvroIterable<ManifestEntry<F>> reader =
-            Avro.read(file)
-                .project(ManifestEntry.wrapFileSchema(Types.StructType.of(fields)))
-                .createResolvingReader(this::newReader)
-                .reuseContainers()
-                .build();
+    CloseableIterable<ManifestEntry<F>> reader =
+        InternalData.read(format, file)
+            .project(ManifestEntry.wrapFileSchema(Types.StructType.of(fields)))
+            .reuseContainers()
+            .build();
 
-        addCloseable(reader);
+    addCloseable(reader);
 
-        return CloseableIterable.transform(reader, inheritableMetadata::apply);
+    return CloseableIterable.transform(reader, inheritableMetadata::apply);
-
-      default:
-        throw new UnsupportedOperationException("Invalid format for manifest file: " + format);
-    }

This shows:

  • Rather than a switch, the format is passed to create the builder
  • There is no longer a callback passed to create readers for the object model (createResolvingReader)

In this PR, there are a lot of other changes as well. I'm looking at one of the simpler Spark cases in the row reader.

The builder is initialized from DataFileServiceRegistry and now requires a format, class name, file, projection, and constant map:

    return DataFileServiceRegistry.readerBuilder(
            format, InternalRow.class.getName(), file, projection, idToConstant)

There are also new static classes in the file. Each creates a new service and each service creates the builder and object model:

  public static class AvroReaderService implements DataFileServiceRegistry.ReaderService {
    @Override
    public DataFileServiceRegistry.Key key() {
      return new DataFileServiceRegistry.Key(FileFormat.AVRO, InternalRow.class.getName());
    }

    @Override
    public ReaderBuilder builder(
        InputFile inputFile,
        Schema readSchema,
        Map<Integer, ?> idToConstant,
        DeleteFilter<?> deleteFilter) {
      return Avro.read(inputFile)
          .project(readSchema)
          .createResolvingReader(schema -> SparkPlannedAvroReader.create(schema, idToConstant));
    }

The createResolvingReader line is still there, just moved into its own service class instead of in branches of a switch statement.

In addition, there are now a lot more abstractions:

  • A builder for creating an appender for a file format
  • A builder for creating a data file writer for a file format
  • A builder for creating an equality delete writer for a file format
  • A builder for creating a position delete writer for a file format
  • A builder for creating a reader for a file format
  • A "service" registry (what is a service?)
  • A "key"
  • A writer service
  • A reader service

I think that the next steps are to focus on making this a lot simpler, and there are some good ways to do that:

  • Focus on removing boilerplate and hiding the internals. For instance, Key, if needed, should be an internal abstraction and not complexity that is exposed to callers
  • The format-specific data and delete file builders typically wrap an appender builder. Is there a way to handle just the reader builder and appender builder?
  • Is the extra "service" abstraction helpful?
  • Remove ServiceLoader and use a simpler solution. I think that formats could simply register themselves like we do for InternalData. I think it would be fine to have a trade-off that Iceberg ships with a list of known formats that can be loaded, and if you want to replace that list it's at your own risk (see the sketch after this list)
  • Standardize more across the builders for FileFormat. How idToConstant is handled is a good example. That should be passed to the builder instead of making the whole API more complicated. Projection is the same.
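
A minimal sketch of that registration style, loosely modeled on InternalData (the class, method names, and key layout are illustrative assumptions, not this PR's API):

  // Explicit registration instead of ServiceLoader discovery: known formats register
  // their object models up front, keyed by (file format, object model name).
  import java.util.Map;
  import java.util.concurrent.ConcurrentHashMap;
  import org.apache.iceberg.FileFormat;

  final class FormatRegistrationSketch {
    private static final Map<String, Object> MODELS = new ConcurrentHashMap<>();

    private FormatRegistrationSketch() {}

    static {
      // Iceberg ships with a known list of registered formats; replacing this list
      // is possible but at the caller's own risk, e.g.:
      // GenericObjectModels.register(); SparkObjectModels.register();
    }

    static void register(FileFormat format, String modelName, Object objectModel) {
      MODELS.put(format.name() + ":" + modelName, objectModel);
    }

    static Object modelFor(FileFormat format, String modelName) {
      return MODELS.get(format.name() + ":" + modelName);
    }
  }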

@pvary
Contributor Author

pvary commented Feb 24, 2025

While I think the goal here is a good one, the implementation looks too complex to be workable in its current form.

I'm happy that we agree on the goals. I created a PR to start the conversation. If there are willing reviewers we can introduce more invasive changes to achieve a better API. I'm all for it!

The primary issue that we currently have is adapting object models (like Iceberg's internal StructLike, Spark's InternalRow, or Flink's RowData) to file formats, so that you can write the object-model-to-format glue code separately and have it work throughout an engine's support.

I think we need to keep these direct transformations to prevent the performance loss which would be caused by multiple transformations between object model -> common model -> file format.

We have a matrix of transformations which we need to encode somewhere:

Source   Target
Parquet  StructLike
Parquet  InternalRow
Parquet  RowData
Parquet  Arrow
Avro     ...
ORC      ...

[..]

  • Rather than a switch, the format is passed to create the builder
  • There is no longer a callback passed to create readers for the object model (createResolvingReader)

The InternalData reader has one advantage over the data file readers/writers: its object model is fixed. For the DataFile readers/writers we have multiple object models to handle.

[..]
I think that the next steps are to focus on making this a lot simpler, and there are some good ways to do that:

  • Focus on removing boilerplate and hiding the internals. For instance, Key, if needed, should be an internal abstraction and not complexity that is exposed to callers

If we allow adding new builders for the file formats we can remove a good chunk of the boilerplate code. Let me see what this would look like.

  • The format-specific data and delete file builders typically wrap an appender builder. Is there a way to handle just the reader builder and appender builder?

We would need to refactor the Avro positional delete writer for this, or add a positionalWriterFunc. We also need to consider the format-specific configurations which differ between the appenders and the delete files (DELETE_PARQUET_ROW_GROUP_SIZE_BYTES vs. PARQUET_ROW_GROUP_SIZE_BYTES).

  • Is the extra "service" abstraction helpful?

If we are ok with having a new Builder for the readers/writers, then we don't need the service. It was needed to keep the current APIs and the new APIs compatible.

  • Remove ServiceLoader and use a simpler solution. I think that formats could simply register themselves like we do for InternalData. I think it would be fine to have a trade-off that Iceberg ships with a list of known formats that can be loaded, and if you want to replace that list it's at your own risk.

Will do

  • Standardize more across the builders for FileFormat. How idToConstant is handled is a good example. That should be passed to the builder instead of making the whole API more complicated. Projection is the same.

Will see what can be achieved.

@pvary pvary force-pushed the file_Format_api_without_base branch 5 times, most recently from c488d32 to 71ec538 Compare February 25, 2025 16:53

private FormatModelRegistry() {}

private static class FileWriterBuilderImpl<W extends FileWriter<?, ?>, D, S>
Contributor

I don't think that the type params are quite right here. The row type of FileWriter should be D, right? That means that this should probably be FileWriterBuilderImpl<D, S, W extends FileWriter<D, ?>>, right? And it seems suspicious that we aren't correctly carrying through the R param of FileWriter, too. This could probably be parameterized by R since it is determined by the returned writer type.

Contributor Author

I left it like this, because it needs some ugly casting magic on the registry side:

    FormatModel<PositionDelete<D>, ?> model =
        (FormatModel<PositionDelete<D>, ?>) (FormatModel) modelFor(format, PositionDelete.class);

Updated the code based on your recommendation. Check if you like it this way better, or not.

@pvary pvary force-pushed the file_Format_api_without_base branch from fc5a2f2 to 8a8a67e Compare January 31, 2026 10:50
// Spark eagerly consumes the batches. So the underlying memory allocated could be
// reused without worrying about subsequent reads clobbering over each other. This
// improves read performance as every batch read doesn't have to pay the cost of
// allocating memory.
Contributor

Nit: Did this need to be reformatted? It's less of a problem if there aren't substantive changes mixed together with reformatting.

Contributor Author

Reformatted to comply with line-length restrictions. The increased indentation required the comment to be reformatted.

@pvary pvary force-pushed the file_Format_api_without_base branch from cecf8c3 to bec9b38 Compare February 3, 2026 10:54
* @param outputFile destination for the written data
* @return a configured delete write builder for creating a {@link PositionDeleteWriter}
*/
@SuppressWarnings({"unchecked", "rawtypes"})
Contributor

I don't think this needs all of the casts and rawtypes. This works for me:

  @SuppressWarnings("unchecked")
  public static <D> FileWriterBuilder<PositionDeleteWriter<D>, ?> positionDeleteWriteBuilder(
      FileFormat format, EncryptedOutputFile outputFile) {
    FormatModel<PositionDelete<D>, ?> model = FormatModelRegistry.modelFor(format, PositionDelete.class);
    return FileWriterBuilderImpl.forPositionDelete(model, outputFile);
  }

Contributor Author

This is very strange: IntelliJ doesn't report a compilation error, but when compiling from the command line we get this:

> Task :iceberg-core:compileJava
/Users/petervary/dev/iceberg/core/src/main/java/org/apache/iceberg/formats/FormatModelRegistry.java:182: error: incompatible types: cannot infer type-variable(s) D#1,S
    FormatModel<PositionDelete<D>, ?> model = modelFor(format, PositionDelete.class);
                                                      ^
    (argument mismatch; Class<PositionDelete> cannot be converted to Class<? extends PositionDelete<D#2>>)
  where D#1,S,D#2 are type-variables:
    D#1 extends Object declared in method <D#1,S>modelFor(FileFormat,Class<? extends D#1>)
    S extends Object declared in method <D#1,S>modelFor(FileFormat,Class<? extends D#1>)
    D#2 extends Object declared in method <D#2>positionDeleteWriteBuilder(FileFormat,EncryptedOutputFile)
Note: Some input files use or override a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
1 error

Basically the raw FormatModel<PositionDelete, ?> cannot be converted to FormatModel<PositionDelete<D>, ?>

Alternatively we can do something like this (equally ugly):

    FormatModel<PositionDelete<D>, ?> model =
        modelFor(format, (Class<PositionDelete<D>>) (Class) PositionDelete.class);

"Equality field ids not supported for this writer type");
}

ModelWriteBuilder<D, S> modelWriteBuilder() {
Contributor

Is there a reason to use package-private rather than protected? Looks like these are intended for use in the private subclasses. I think this is the more restrictive option?

Contributor Author

According to the Java documentation (https://docs.oracle.com/javase/tutorial/java/javaOO/accesscontrol.html), package-private access is more restrictive than protected. Our current Checkstyle rules require accessor methods for both protected and package-private fields, so regardless of which visibility we choose, we still need accessor methods.
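
A minimal illustration, assuming two hypothetical classes in different packages:

  // File one/Parent.java
  package one;

  public class Parent {
    protected int visibleToSubclasses; // accessible from subclasses in any package
    int packageOnly;                   // package-private: accessible only inside package "one"
  }

  // File two/Child.java
  package two;

  public class Child extends one.Parent {
    int demo() {
      return visibleToSubclasses;      // compiles: protected is inherited across packages
      // return packageOnly;           // would not compile: package-private is more restrictive
    }
  }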

keyMetadata());
}

private static class PositionDeleteFileAppender<T> implements FileAppender<StructLike> {
Contributor

Because of this and some of the suppressions (like "rawtypes"), I took a deeper look into the type params in this class.

I was able to avoid needing this class by updating PositionDeleteWriter so that its FileAppender is parameterized by PositionDelete<T> rather than StructLike (which is a parent of PositionDelete). Here's the diff:

diff --git a/core/src/main/java/org/apache/iceberg/deletes/PositionDeleteWriter.java b/core/src/main/java/org/apache/iceberg/deletes/PositionDeleteWriter.java
index a8af5e9d0f..6fcd772d59 100644
--- a/core/src/main/java/org/apache/iceberg/deletes/PositionDeleteWriter.java
+++ b/core/src/main/java/org/apache/iceberg/deletes/PositionDeleteWriter.java
@@ -51,7 +51,7 @@ public class PositionDeleteWriter<T> implements FileWriter<PositionDelete<T>, De
   private static final Set<Integer> FILE_AND_POS_FIELD_IDS =
       ImmutableSet.of(DELETE_FILE_PATH.fieldId(), DELETE_FILE_POS.fieldId());
 
-  private final FileAppender<StructLike> appender;
+  private final FileAppender<PositionDelete<T>> appender;
   private final FileFormat format;
   private final String location;
   private final PartitionSpec spec;
@@ -61,7 +61,7 @@ public class PositionDeleteWriter<T> implements FileWriter<PositionDelete<T>, De
   private DeleteFile deleteFile = null;
 
   public PositionDeleteWriter(
-      FileAppender<StructLike> appender,
+      FileAppender<PositionDelete<T>> appender,
       FileFormat format,
       String location,
       PartitionSpec spec,

Also, I looked into consolidating as much as possible into the parent class, and I think that validations are cleaner if they are put in a validate method on the parent. However, there was an issue with the type param D for PositionDeleteWriterBuilder: PositionDeleteWriter needs to be constructed in the PositionDeleteWriterBuilder class, because D in FileWriterBuilderImpl is D=PositionDelete<T> and there is no way to identify T in the parent class. That made me keep this model of implementing the build methods in the child classes.

I also think it's less code to use protected instance fields rather than the getter methods. It seems slightly cleaner, but I'm fine if you don't like the change and want to keep the getters. Here's the diff for the other changes:

diff --git a/core/src/main/java/org/apache/iceberg/formats/FileWriterBuilderImpl.java b/core/src/main/java/org/apache/iceberg/formats/FileWriterBuilderImpl.java
index 85c7464069..e79161bd7c 100644
--- a/core/src/main/java/org/apache/iceberg/formats/FileWriterBuilderImpl.java
+++ b/core/src/main/java/org/apache/iceberg/formats/FileWriterBuilderImpl.java
@@ -20,13 +20,11 @@ package org.apache.iceberg.formats;
 
 import java.io.IOException;
 import java.nio.ByteBuffer;
-import java.util.List;
 import java.util.Objects;
 import java.util.stream.Collectors;
 import java.util.stream.IntStream;
 import org.apache.iceberg.FileContent;
 import org.apache.iceberg.FileFormat;
-import org.apache.iceberg.Metrics;
 import org.apache.iceberg.MetricsConfig;
 import org.apache.iceberg.PartitionSpec;
 import org.apache.iceberg.Schema;
@@ -38,21 +36,11 @@ import org.apache.iceberg.deletes.PositionDeleteWriter;
 import org.apache.iceberg.encryption.EncryptedOutputFile;
 import org.apache.iceberg.encryption.EncryptionKeyMetadata;
 import org.apache.iceberg.io.DataWriter;
-import org.apache.iceberg.io.FileAppender;
 import org.apache.iceberg.io.FileWriter;
 import org.apache.iceberg.relocated.com.google.common.base.Preconditions;
 
 abstract class FileWriterBuilderImpl<W extends FileWriter<D, ?>, D, S>
     implements FileWriterBuilder<W, S> {
-  private final ModelWriteBuilder<D, S> modelWriteBuilder;
-  private final String location;
-  private final FileFormat format;
-  private Schema schema = null;
-  private PartitionSpec spec = null;
-  private StructLike partition = null;
-  private EncryptionKeyMetadata keyMetadata = null;
-  private SortOrder sortOrder = null;
-
   /** Creates a builder for {@link DataWriter} instances for writing data files. */
   static <D, S> FileWriterBuilder<DataWriter<D>, S> forDataFile(
       FormatModel<D, S> model, EncryptedOutputFile outputFile) {
@@ -75,8 +63,20 @@ abstract class FileWriterBuilderImpl<W extends FileWriter<D, ?>, D, S>
     return new PositionDeleteWriterBuilder<>(model, outputFile);
   }
 
+  private final FileContent content;
+  protected final ModelWriteBuilder<D, S> modelWriteBuilder;
+  protected final String location;
+  protected final FileFormat format;
+  protected Schema schema = null;
+  protected PartitionSpec spec = null;
+  protected StructLike partition = null;
+  protected EncryptionKeyMetadata keyMetadata = null;
+  protected SortOrder sortOrder = null;
+  protected int[] equalityFieldIds = null;
+
   private FileWriterBuilderImpl(
       FormatModel<D, S> model, EncryptedOutputFile outputFile, FileContent content) {
+    this.content = content;
     this.modelWriteBuilder = model.writeBuilder(outputFile).content(content);
     this.location = outputFile.encryptingOutputFile().location();
     this.format = model.format();
@@ -157,40 +157,27 @@ abstract class FileWriterBuilderImpl<W extends FileWriter<D, ?>, D, S>
 
   @Override
   public FileWriterBuilderImpl<W, D, S> equalityFieldIds(int... fieldIds) {
-    throw new UnsupportedOperationException(
-        "Equality field ids not supported for this writer type");
-  }
-
-  ModelWriteBuilder<D, S> modelWriteBuilder() {
-    return modelWriteBuilder;
-  }
-
-  String location() {
-    return location;
-  }
-
-  FileFormat format() {
-    return format;
-  }
-
-  Schema schema() {
-    return schema;
-  }
-
-  PartitionSpec spec() {
-    return spec;
-  }
+    if (content != FileContent.EQUALITY_DELETES) {
+      throw new UnsupportedOperationException(
+          "Equality field ids not supported for this writer type");
+    }
 
-  StructLike partition() {
-    return partition;
-  }
+    this.equalityFieldIds = fieldIds;
 
-  EncryptionKeyMetadata keyMetadata() {
-    return keyMetadata;
+    return this;
   }
 
-  SortOrder sortOrder() {
-    return sortOrder;
+  protected void validate() {
+    Preconditions.checkState(
+        content != FileContent.EQUALITY_DELETES || equalityFieldIds != null,
+        "Invalid delete field ids for equality delete writer: null");
+    Preconditions.checkState(
+        content == FileContent.POSITION_DELETES || schema != null, "Invalid schema: null");
+    Preconditions.checkArgument(spec != null, "Invalid partition spec: null");
+    Preconditions.checkArgument(
+        spec.isUnpartitioned() || partition != null,
+        "Invalid partition, does not match spec: %s",
+        spec);
   }
 
   /** Builder for creating {@link DataWriter} instances for writing data files. */
@@ -203,21 +190,9 @@ abstract class FileWriterBuilderImpl<W extends FileWriter<D, ?>, D, S>
 
     @Override
     public DataWriter<D> build() throws IOException {
-      Preconditions.checkState(schema() != null, "Invalid schema for data writer: null");
-      Preconditions.checkArgument(spec() != null, "Invalid partition spec for data writer: null");
-      Preconditions.checkArgument(
-          spec().isUnpartitioned() || partition() != null,
-          "Invalid partition, does not match spec: %s",
-          spec());
-
+      validate();
       return new DataWriter<>(
-          modelWriteBuilder().build(),
-          format(),
-          location(),
-          spec(),
-          partition(),
-          keyMetadata(),
-          sortOrder());
+          modelWriteBuilder.build(), format, location, spec, partition, keyMetadata, sortOrder);
     }
   }

@@ -227,33 +202,16 @@ abstract class FileWriterBuilderImpl<W extends FileWriter<D, ?>, D, S>
   private static class EqualityDeleteWriterBuilder<D, S>
       extends FileWriterBuilderImpl<EqualityDeleteWriter<D>, D, S> {
 
-    private int[] equalityFieldIds = null;
-
     private EqualityDeleteWriterBuilder(FormatModel<D, S> model, EncryptedOutputFile outputFile) {
       super(model, outputFile, FileContent.EQUALITY_DELETES);
     }
 
-    @Override
-    public EqualityDeleteWriterBuilder<D, S> equalityFieldIds(int... fieldIds) {
-      this.equalityFieldIds = fieldIds;
-      return this;
-    }
-
     @Override
     public EqualityDeleteWriter<D> build() throws IOException {
-      Preconditions.checkState(schema() != null, "Invalid schema for equality delete writer: null");
-      Preconditions.checkState(
-          equalityFieldIds != null, "Invalid delete field ids for equality delete writer: null");
-      Preconditions.checkArgument(
-          spec() != null, "Invalid partition spec for equality delete writer: null");
-      Preconditions.checkArgument(
-          spec().isUnpartitioned() || partition() != null,
-          "Invalid partition, does not match spec: %s",
-          spec());
-
+      validate();
       return new EqualityDeleteWriter<>(
-          modelWriteBuilder()
-              .schema(schema())
+          modelWriteBuilder
+              .schema(schema)
               .meta("delete-type", "equality")
               .meta(
                   "delete-field-ids",
@@ -261,12 +219,12 @@ abstract class FileWriterBuilderImpl<W extends FileWriter<D, ?>, D, S>
                       .mapToObj(Objects::toString)
                       .collect(Collectors.joining(", ")))
               .build(),
-          format(),
-          location(),
-          spec(),
-          partition(),
-          keyMetadata(),
-          sortOrder(),
+          format,
+          location,
+          spec,
+          partition,
+          keyMetadata,
+          sortOrder,
           equalityFieldIds);
     }
   }
@@ -284,55 +242,14 @@ abstract class FileWriterBuilderImpl<W extends FileWriter<D, ?>, D, S>
 
     @Override
     public PositionDeleteWriter<D> build() throws IOException {
-      Preconditions.checkArgument(
-          spec() != null, "Invalid partition spec for position delete writer: null");
-      Preconditions.checkArgument(
-          spec().isUnpartitioned() || partition() != null,
-          "Invalid partition, does not match spec: %s",
-          spec());
-
+      validate();
       return new PositionDeleteWriter<>(
-          new PositionDeleteFileAppender<>(
-              modelWriteBuilder().meta("delete-type", "position").build()),
-          format(),
-          location(),
-          spec(),
-          partition(),
-          keyMetadata());
-    }
-
-    private static class PositionDeleteFileAppender<T> implements FileAppender<StructLike> {
-      private final FileAppender<PositionDelete<T>> appender;
-
-      PositionDeleteFileAppender(FileAppender<PositionDelete<T>> appender) {
-        this.appender = appender;
-      }
-
-      @SuppressWarnings("unchecked")
-      @Override
-      public void add(StructLike positionDelete) {
-        appender.add((PositionDelete<T>) positionDelete);
-      }
-
-      @Override
-      public Metrics metrics() {
-        return appender.metrics();
-      }
-
-      @Override
-      public long length() {
-        return appender.length();
-      }
-
-      @Override
-      public void close() throws IOException {
-        appender.close();
-      }
-
-      @Override
-      public List<Long> splitOffsets() {
-        return appender.splitOffsets();
-      }
+          modelWriteBuilder.meta("delete-type", "position").build(),
+          format,
+          location,
+          spec,
+          partition,
+          keyMetadata);
     }
   }
 }

I think this is a bit better and removes some of the casting needed.

Contributor Author

I tried to avoid changing PositionDeleteWriter, since that could introduce a breaking change for external users who might be using the writer with a StructLike appender. Let's discuss this further in the API PR.

I kept the attributes private and retained the accessor methods (as required by Checkstyle).

Merged the validation logic as suggested.

Contributor

@rdblue rdblue left a comment

Overall, I think this is about ready to go in. I think we can finish reviewing the minor items like Javadoc in a focused PR for the formats package. Thanks @pvary!

pvary added a commit to pvary/iceberg that referenced this pull request Feb 5, 2026
@pvary
Contributor Author

pvary commented Feb 5, 2026

Thanks @rdblue for the final review!

Pushed the relevant part of this PR to #12774, so we can continue there

