🤖 AI Summary
This study addresses the challenge of enabling non-technical users to rapidly build personalized content classifiers for curating their social media feeds. Motivated by how people actually use social media (casually, in many short sessions, with frequent revisiting), the authors compare three initialization strategies -- example-based labeling, rule authoring, and large language model (LLM) prompting -- in an experiment with 37 non-programmers. The results show strong context dependence: although LLM prompting performed best overall, participants preferred different strategies in different contexts; all three approaches struggled during iterative refinement; and participants spontaneously adopted hybrid strategies (e.g., embedding labeled examples into prompts or writing rule-like prompts) to work around these difficulties. The core contribution is empirical evidence that lightweight, personally built classifiers are a viable paradigm for end users, together with support for hybrid initialization strategies aimed at non-experts.
📝 Abstract
Existing tools for laypeople to create personal classifiers often assume a motivated user working uninterrupted in a single, lengthy session. However, users tend to engage with social media casually, in many short sessions on an ongoing, daily basis. To make creating personal classifiers for content curation easier for such users, tools should support rapid initialization and iterative refinement. In this work, we compare three strategies -- (1) example labeling, (2) rule writing, and (3) large language model (LLM) prompting -- for end users to build personal content classifiers. From an experiment with 37 non-programmers tasked with creating personalized moderation filters, we found that participants preferred different initialization strategies in different contexts, despite LLM prompting's better performance. However, all strategies faced challenges with iterative refinement. To overcome these iteration challenges, participants even adopted hybrid approaches, such as embedding labeled examples as in-context examples in their prompts or writing rule-like prompts.
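To make the three initialization strategies concrete, here is a minimal illustrative sketch, not the study's actual tooling: all function names and the filtering heuristics are hypothetical stand-ins. Example labeling is approximated by word-overlap nearest neighbor, rule writing by keyword matching, and the hybrid strategy participants adopted is shown as labeled examples embedded in-context in an LLM prompt template (the prompt is only assembled here, not sent to a model).

```python
def rule_classifier(rules):
    """Rule writing: flag a post if any user-authored keyword appears."""
    def classify(post):
        text = post.lower()
        return any(keyword in text for keyword in rules)
    return classify

def example_classifier(labeled_examples):
    """Example labeling: copy the label of the most word-similar labeled post."""
    def classify(post):
        words = set(post.lower().split())
        # Pick the labeled example with the largest word overlap.
        best_text, best_label = max(
            labeled_examples,
            key=lambda ex: len(words & set(ex[0].lower().split())),
        )
        return best_label
    return classify

def build_hybrid_prompt(instruction, labeled_examples):
    """Hybrid strategy: embed labeled examples as in-context examples
    inside an LLM prompt, leaving a slot for the new post."""
    lines = [instruction, ""]
    for text, should_remove in labeled_examples:
        lines.append(f'Post: "{text}" -> {"remove" if should_remove else "keep"}')
    lines.append('Post: "{new_post}" ->')
    return "\n".join(lines)
```

A quick usage example under these assumptions: `rule_classifier(["spoiler"])("A SPOILER ahead")` returns `True`, and `build_hybrid_prompt` produces a few-shot prompt string that a tool could send to an LLM for classification.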