Spark 3.1.2 is a maintenance release containing stability fixes. This release is based on the branch-3.1 maintenance branch of Spark. We strongly recommend all 3.1 users to upgrade to this stable release.

Notable changes:

- CreateTableAsSelect should have metrics updates too
- Cannot drop/add columns from/to a dataset of v2 DESCRIBE TABLE
- Checking duplicate static partition columns doesn't respect the case-sensitivity conf
- Resolve metadata output from DataFrame
- Skip InSet null values when pushing filters to the Hive metastore
- Resolve using child metadata attributes as a fallback
- Fix inconsistent results when applying 2 Python UDFs with different return types to 2 columns together
- Respect case sensitivity in V1 ALTER TABLE
- New protocol FetchShuffleBlocks in OneForOneBlockFetcher leads to data loss or correctness issues
- Fix NPE if InSet contains a null value during getPartitionsByFilter
- Avoid unnecessary view resolving and remove the performCheck flag
- JDBC connection provider is not removing Kerberos credentials from the JVM security context
- Table may be resolved as a view if the table is dropped
- Correct the active SparkSession for streaming queries
- Avoid NPE in DataFrameReader.schema(StructType)
- DataFrameNaFunctions.fillMap(values: Seq) fails for column names having a dot
- Invalid ID for offset-based ZoneId since Spark 3.0
- Dynamic allocation on K8s kills executors with running tasks
- V2 Datasources that extend FileScan preclude exchange reuse