A new framework restructures enterprise workflows into LLM-friendly knowledge representations to improve customer support automation. By introducing intent-based reasoning formats and synthetic training pipelines, the study enhances model interpretability, accuracy, and scalability, enabling more efficient AI-driven decision-making across complex operational environments.
At the 31st International Conference on Computational Linguistics (COLING 2025), researchers affiliated with a leading Silicon Valley technology company presented a study titled "LLM-Friendly Knowledge Representation for Customer Support," which explores a new framework designed to help Large Language Models (LLMs) interpret and apply enterprise workflows more effectively. The research introduces an approach that restructures complex operational processes to improve the scalability and performance of AI-driven support systems.
A central contribution of the study is the Intent, Context, and Action (ICA) format, which restructures operational workflows into a pseudocode-style representation optimized for LLM comprehension. Experiments reported in the paper show that ICA improves model interpretability and enables more accurate action predictions, achieving up to a 25 percent accuracy gain and a 13 percent reduction in manual processing time. According to the authors, the ICA methodology sets a new benchmark for workflow representation in customer support and provides a foundation for extending business-knowledge reformatting to other complex domains such as legal and finance.
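The paper's exact ICA schema is not reproduced in this release, so the following is a minimal Python sketch of the general idea: a workflow step is decomposed into an intent, the contextual conditions an agent would check, and the resulting action, then rendered as pseudocode-style text that could be placed in an LLM prompt. All field names and the refund workflow itself are hypothetical illustrations, not the authors' actual format.

```python
from dataclasses import dataclass

@dataclass
class ICAStep:
    """One workflow step in a hypothetical Intent/Context/Action layout."""
    intent: str      # what the customer is trying to achieve
    context: dict    # conditions the support agent must verify first
    action: str      # the action the model should recommend

def render_ica(steps):
    """Render steps as pseudocode-style text an LLM can read in its prompt."""
    lines = []
    for step in steps:
        lines.append(f"INTENT: {step.intent}")
        for key, value in step.context.items():
            lines.append(f"  IF {key} == {value!r}:")
        lines.append(f"    ACTION: {step.action}")
    return "\n".join(lines)

# Toy refund workflow, invented for illustration only.
refund_flow = [
    ICAStep(
        intent="request_refund",
        context={"order_status": "delivered", "within_return_window": True},
        action="issue_full_refund",
    ),
    ICAStep(
        intent="request_refund",
        context={"order_status": "delivered", "within_return_window": False},
        action="offer_store_credit",
    ),
]

print(render_ica(refund_flow))
```

The point of such a rendering is that the model sees branching business logic as explicit, uniform pseudocode rather than as free-form policy prose, which is what the paper credits for the interpretability and accuracy gains.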
The study also addresses dataset limitations by introducing a synthetic data generation pipeline that supports supervised fine-tuning with minimal human involvement. The method produces training instances by simulating user queries, contextual conditions, and decision-tree structures, enabling LLMs to learn reasoning patterns aligned with real-world support scenarios. According to the experiments, this approach reduces training costs and allows smaller open-source models to approach the performance and latency of larger systems, representing a meaningful advancement in scalable enterprise AI development.
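The release describes the pipeline only at a high level (simulated queries, contextual conditions, and decision-tree structures), so here is a hedged Python sketch of how such supervised fine-tuning data could be generated. The decision tree, query templates, and prompt layout are all assumptions made for illustration; the authors' actual pipeline is not specified here.

```python
import json
import random

# Hypothetical decision tree mapping (order_status, within_return_window)
# to the correct support action; a real system would derive this from
# enterprise workflow documentation.
DECISION_TREE = {
    ("delivered", True): "issue_full_refund",
    ("delivered", False): "offer_store_credit",
    ("in_transit", True): "ask_customer_to_wait",
    ("in_transit", False): "ask_customer_to_wait",
}

# Simulated user-query templates (invented for this sketch).
QUERY_TEMPLATES = [
    "I want my money back for order {oid}.",
    "Can I get a refund on {oid}?",
]

def synthesize_example(rng):
    """Sample a simulated query plus context and label it via the tree."""
    status = rng.choice(["delivered", "in_transit"])
    in_window = rng.choice([True, False])
    oid = f"#{rng.randint(1000, 9999)}"
    query = rng.choice(QUERY_TEMPLATES).format(oid=oid)
    context = {"order_status": status, "within_return_window": in_window}
    return {
        "prompt": f"Query: {query}\nContext: {json.dumps(context)}",
        "completion": DECISION_TREE[(status, in_window)],
    }

rng = random.Random(0)  # seeded for reproducible dataset generation
dataset = [synthesize_example(rng) for _ in range(100)]
print(dataset[0]["prompt"])
```

Because the labels come mechanically from the decision tree rather than from human annotators, a pipeline of this shape can produce large labeled sets cheaply, which is consistent with the study's claim of reduced training cost and minimal human involvement.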
Among the authors, Hanchen Su is a Staff Machine Learning Engineer whose work focuses on machine learning, natural language processing, and statistical learning. He holds an M.S. in Artificial Intelligence from Peking University and, across his various roles, has contributed to projects involving intelligent customer service, pricing strategy, recommendation systems, and market intelligence. His technical experience spans deep learning, Spark, SQL, Java, Python, and large-scale data processing tools such as Airflow, Bighead, and Bigqueue.
Su previously worked as a Staff Data Scientist at the Beijing office of a leading Silicon Valley technology company, where he developed systems for listing verification, pricing strategy and price suggestion, market intelligence, market understanding, and trend prediction, as well as search ranking and recommendation. His earlier roles include Senior Machine Learning Engineer at Meituan, Machine Learning Engineer at Yidianzixun, and Machine Learning Engineer at Sohu. He also co-founded Leappmusic as its Tech Lead, leading a team of more than 20 engineers building the crawler, recommendation system, backend services, and content management service for a mobile app.
The study concludes that the ICA methodology provides a replicable framework for integrating structured reasoning into LLM-based systems. By reformulating operational knowledge into a format optimized for machine interpretation, the research outlines a foundation for future AI applications capable of supporting complex decision-making with improved accuracy, transparency, and efficiency.
Contact Info:
Name: Hanchen Su
Organization: Hanchen Su
Website: https://scholar.google.com/citations?user=Fhg_DhsAAAAJ&hl=en
Release ID: 89180448


