In Spark, iterate through each column and find the max length

I am new to Spark and Scala, and I have the following situation: there is a table "TEST_TABLE" on the cluster (it can be a Hive table). I convert it to a DataFrame as:

scala> val testDF = spark.sql("select * from TEST_TABLE limit 10")

Now the DF can be viewed as

scala> testDF.show()

+----+--------+------+
|COL1|    COL2|  COL3|
+----+--------+------+
| abc|    abcd|abcdef|
|   a|  BCBDFG|qddfde|
|  MN|1234B678|    sd|
+----+--------+------+

I want an output like below

COLUMN_NAME|MAX_LENGTH
COL1|3
COL2|8
COL3|6

Is it feasible to do this in Spark with Scala?
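
For what it's worth, here is one possible approach (a sketch, not necessarily the only way): build a max(length(...)) aggregate for every column, evaluate them all in a single pass, then reshape the one-row result into (COLUMN_NAME, MAX_LENGTH) pairs. It assumes the columns are string-typed and that the testDF and the spark session created above are in scope.

import org.apache.spark.sql.functions.{col, length, max}
import spark.implicits._

// One max(length(column)) expression per column, all evaluated in a single aggregation
val aggExprs = testDF.columns.map(c => max(length(col(c))).as(c))
val maxRow   = testDF.agg(aggExprs.head, aggExprs.tail: _*).collect()(0)

// Reshape the single aggregated row into (COLUMN_NAME, MAX_LENGTH) pairs
val result = testDF.columns.zipWithIndex
  .map { case (name, i) => (name, maxRow.getInt(i)) } // getInt assumes the column is not all nulls
  .toSeq
  .toDF("COLUMN_NAME", "MAX_LENGTH")

result.show()

With the sample rows above, this should print 3 for COL1, 8 for COL2 and 6 for COL3.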

#scala #apache-spark
