Description
Is your feature request related to a problem? Please describe.
Add the ability to chunk the text in a PDF into a DataFrame / Delta Lake table, to enable building RAG and semantic search applications.
Provide controls for chunk size and overlap.
Add capability to Pixels to index and collect header metadata from PDFs (a rough sketch of the collection step follows below).
Add capability to run embeddings on the chunked text (see the embedding sketch after the splitting sample).
The value add is scaling processing to 100k+ PDF files with a minimal amount of code.
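
For the header-metadata item, a minimal sketch of what the collection step could look like, assuming pypdf is available on the cluster; the UDF name and the map<string,string> return type are illustrative, not an existing Pixels API:

from pyspark.sql.functions import udf

@udf('map<string,string>')
def extract_pdf_metadata(path: str) -> dict:
    from pypdf import PdfReader
    # Document-info keys come back as '/Title', '/Author', etc.; strip the slash.
    info = PdfReader(path).metadata or {}
    return {key.lstrip('/'): str(value) for key, value in info.items()}

The resulting map column could then be stored alongside the path in the Pixels catalog table and used as filter metadata for the downstream index.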
Sample Python splitting code, taken from this Volumes blog:
from pyspark.sql.functions import udf

@udf('array<string>')
def gen_chunks(path: str) -> list[str]:
    # Lazy imports so the dependencies are only loaded on the executors.
    from pdfminer.high_level import extract_text
    from langchain.text_splitter import TokenTextSplitter
    # Extract the raw text from the PDF, then split it into token-based chunks.
    text = extract_text(path)
    splitter = TokenTextSplitter(chunk_size=500, chunk_overlap=50)
    return [doc.page_content for doc in splitter.create_documents([text])]
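
For the embedding step, a rough sketch of how the chunked column could be embedded at scale with a pandas UDF; the sentence-transformers checkpoint, the chunks_df / chunks column names, and the target table name are all assumptions for illustration:

import pandas as pd
from pyspark.sql.functions import explode, pandas_udf
from pyspark.sql.types import ArrayType, FloatType

@pandas_udf(ArrayType(FloatType()))
def embed(texts: pd.Series) -> pd.Series:
    # Loading the model per batch keeps the sketch simple; an iterator-style
    # pandas UDF would avoid reloading it for every batch.
    from sentence_transformers import SentenceTransformer
    model = SentenceTransformer('all-MiniLM-L6-v2')
    return pd.Series(model.encode(texts.tolist()).tolist())

# One row per chunk, with its embedding, written to a Delta table for vector search.
chunks_exploded = chunks_df.withColumn('chunk', explode('chunks'))
(chunks_exploded
    .withColumn('embedding', embed('chunk'))
    .write.format('delta')
    .saveAsTable('pdf_chunk_embeddings'))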
The solution should be a pluggable Spark ML Transformer.
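
A minimal sketch of what such a Transformer could look like, wrapping the splitting UDF above and exposing chunk size / overlap as params; the class name PdfChunker and its param names are placeholders, not an existing Pixels API:

from pyspark.ml import Transformer
from pyspark.ml.param import Param, Params, TypeConverters
from pyspark.ml.param.shared import HasInputCol, HasOutputCol
from pyspark.ml.util import DefaultParamsReadable, DefaultParamsWritable
from pyspark.sql.functions import udf

class PdfChunker(Transformer, HasInputCol, HasOutputCol,
                 DefaultParamsReadable, DefaultParamsWritable):
    # Splits the PDF found at inputCol (a path) into an array<string> of text chunks.
    chunkSize = Param(Params._dummy(), 'chunkSize', 'tokens per chunk',
                      typeConverter=TypeConverters.toInt)
    chunkOverlap = Param(Params._dummy(), 'chunkOverlap', 'token overlap between chunks',
                         typeConverter=TypeConverters.toInt)

    def __init__(self, inputCol='path', outputCol='chunks', chunkSize=500, chunkOverlap=50):
        super().__init__()
        self._set(inputCol=inputCol, outputCol=outputCol,
                  chunkSize=chunkSize, chunkOverlap=chunkOverlap)

    def _transform(self, dataset):
        size = self.getOrDefault(self.chunkSize)
        overlap = self.getOrDefault(self.chunkOverlap)

        @udf('array<string>')
        def chunk(path: str) -> list[str]:
            from pdfminer.high_level import extract_text
            from langchain.text_splitter import TokenTextSplitter
            splitter = TokenTextSplitter(chunk_size=size, chunk_overlap=overlap)
            return [d.page_content for d in splitter.create_documents([extract_text(path)])]

        return dataset.withColumn(self.getOutputCol(), chunk(self.getInputCol()))

# Usage: PdfChunker(inputCol='path', outputCol='chunks').transform(pdf_df), or drop it
# into a Pipeline alongside an embedding stage.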