Tredyffrin Township Libraries, partnering with Search with Scout, will pilot a conversational discovery assistant that lets public library patrons search across the entire library collection (physical catalog, OverDrive, Hoopla, and licensed databases) through a single natural-language interface. Patrons describe what they need in their own words, in any of twelve languages, and receive unified results with real-time availability in under three seconds. The project addresses a persistent national problem: patrons must navigate four or more disconnected discovery systems to find materials their library already owns, and most give up before they succeed. Over fourteen months, Tredyffrin will deploy the assistant, measure impact on discovery success, digital collection use, and equitable access, and publish a replication guide for public libraries nationwide. The assistant is built by two working librarians and logs zero patron queries, aligning technical design with the profession's commitment to intellectual freedom.
Ask Once, Find Everything: A Conversational Discovery Pilot for Public Libraries at Tredyffrin Township Libraries
Public libraries serve as one of the last free, equitable points of access to information in American communities. The nation's approximately 9,250 public library systems collectively spend hundreds of millions of dollars each year on digital collections, licensed databases, and physical materials. Yet the tools patrons use to search those collections have not kept pace with the discovery experiences patrons encounter everywhere else in their lives.
A typical public library patron must navigate four or more disconnected systems to find materials their library already owns. The online public access catalog searches print and media holdings. OverDrive has its own application for ebooks and digital audiobooks. Hoopla has another. Licensed databases covering genealogy, consumer health, small business research, legal forms, and local history each have separate interfaces, separate search logic, and separate authentication. A patron who wants "a good audiobook for a long car ride in Spanish" must search each of those systems independently, assuming the patron knows they exist at all.
The outcome is predictable. Most patrons never venture beyond the front-page catalog. Digital collections, despite representing a significant share of most libraries' acquisition budgets, see use that remains a fraction of what collection development officers target. Reference librarians spend meaningful portions of their shifts performing discovery searches that patrons could perform themselves if the tools were more forgiving.
This failure is not distributed evenly. Patrons who rely on libraries most are the ones current discovery tools fail most often. Non-English-speaking patrons face interfaces available only in English and controlled vocabulary rooted in Anglophone subject headings. Patrons with visual, motor, or cognitive impairments encounter accessibility barriers that vary across each platform, so a patron who masters OverDrive still has to relearn Hoopla. Rural and small libraries without dedicated reference staff have no professional to bridge the gap between a patron's need and the library's holdings during weekend or evening hours. Teens and new adults, trained by commercial platforms to expect instant, conversational, comprehensive results, read the friction of legacy catalog search as a signal that the library has nothing for them.
Every day that libraries cannot offer a comparable discovery experience, they lose relevance in the eyes of patrons, boards, and funders, and with it the political support and appropriations that sustain public library service.
The field has tried to solve this with federated search, discovery layers, and improved faceted navigation. Libraries have spent two decades deploying those approaches with limited patron adoption. Reviewers of this proposal will likely recognize the pattern: a vendor layer is installed, staff are trained, a six-month circulation bump is observed, and patron behavior then returns to baseline because the underlying interaction model, keyword input evaluated against controlled vocabulary, still does not match how patrons actually describe what they want.
The gap is not about database coverage. It is about the interaction layer itself. Patrons need a single point of entry that understands their plain-language request, searches every collection their library offers, and returns real-time availability in seconds, on any device, in any language.
This project responds directly to IMLS's strategic commitment to supporting libraries as civic centers of learning and to advancing the adoption of emerging technologies that measurably improve public library service. Recent IMLS guidance to the field has identified large language models and related language technologies as a priority area, urging libraries to explore responsible, privacy-preserving applications of those tools in library service. The alignment is direct: Search with Scout is a privacy-first, accessibility-first, practitioner-built application of conversational language technology to the most common patron workflow in any public library.
Three factors make this the right moment for a Community-Centered Implementation pilot. First, large language models have reached a level of capability where natural-language library search is technically feasible, affordable, and reliable for the first time. Second, public concern about patron privacy and responsible technology use is high, and public libraries are uniquely positioned to model privacy-preserving, intellectual-freedom-aligned deployment of conversational tools. Third, the field has an active IMLS signal to explore these approaches, and a practitioner-led team ready to deploy, measure, and disseminate a replicable model.
Tredyffrin Township Libraries will deploy Search with Scout, a conversational discovery assistant, across all Tredyffrin branches and service channels over a fourteen-month pilot. The project is organized into four phases with defined deliverables and milestones at each transition.
The first phase establishes technical integration and collects a clean baseline against which pilot outcomes can be measured. The project team will complete API integration with Tredyffrin's Sierra ILS (the migration target identified in the library's current catalog integration planning) and configure connectors for OverDrive, Hoopla, and the library's licensed databases. Concurrently, Jonathan Trice and the Tredyffrin reference team will capture ninety days of baseline data covering catalog search volume, digital circulation, reference desk question counts, and language coverage of current service. The project team will recruit eight to twelve librarian testers drawn from Tredyffrin staff and neighboring Chester County Library System member libraries. Phase 1 concludes with a staff orientation session and a go/no-go review of integration completeness.
In Phase 2, the assistant is deployed first to staff-only terminals. Recruited testers run structured query protocols that exercise vocabulary variety, language coverage, accessibility features, and edge cases (ambiguous requests, multi-language queries, requests that should be handed off to a reference librarian). Weekly feedback sessions surface defects and response-quality issues, which are triaged and patched. Patron-facing deployment begins at one to two branches toward the end of Phase 2 so the team can observe real patron interactions before full rollout. Phase 2 concludes with an interim evaluation report and a decision gate on full deployment.
In Phase 3, the assistant is made available to all Tredyffrin patrons through the library website and in-branch kiosks. Anonymized, aggregated usage analytics are captured monthly. Patron intercept surveys are administered each quarter. Staff surveys are administered in month eight and month twelve. An external evaluator reviews progress at the midpoint and collects qualitative data through structured staff interviews. Phase 3 is where the project produces most of its evidence base for effectiveness and equity.
Phase 4, dissemination, deliberately overlaps Phase 3 so findings reach the field as they emerge rather than only at project close. Deliverables include a replicable implementation guide for other public libraries, a public evaluation report with anonymized data, presentations at the Pennsylvania Library Association annual conference, the Public Library Association conference, and the American Library Association annual conference, and a free webinar series for library directors evaluating conversational discovery tools. A final report is submitted to IMLS.
Search with Scout is designed to serve the patrons current library discovery tools fail most often. This plan summarizes how the pilot will advance IMLS's commitment to ensuring that all people have access to information and ideas.
The assistant accepts voice and text input in twelve languages: Spanish, Mandarin, Cantonese, Korean, Vietnamese, Arabic, Russian, Haitian Creole, Portuguese, French, Tagalog, and English. Non-English-speaking patrons can describe what they need in their own language, including idiomatic and colloquial phrasings that controlled vocabulary cannot match. Results are returned with translated metadata where available and preserve the patron's language throughout the dialog.
The assistant meets WCAG 2.2 Level AA standards. It is fully navigable by keyboard and compatible with JAWS, NVDA, and VoiceOver. High-contrast mode and scalable text are built in. Voice input provides an alternative for patrons who cannot or prefer not to type. These are not add-on features; they are baseline requirements of the project and part of every release the team ships.
Low-income patrons, immigrants, and members of marginalized communities are disproportionately harmed by surveillance. The assistant logs zero patron queries. No search content, patron identity, or reading history is stored in any database, and no IP address is retained beyond the active session. This design is rooted in Article VII of the American Library Association's Library Bill of Rights and the profession's commitment to intellectual freedom. It is a precondition of the project, not a feature to be negotiated.
Evaluation metrics explicitly track usage by language of query (not patron identity), usage of accessibility features (voice input, screen reader, large text), and adoption at branches serving lower-income communities. The project team will not collect patron demographic data, which would violate the privacy commitment. Instead, language and accessibility signals function as aggregate equity indicators.
Project results are organized in four measurement areas. Each area pairs a target with an instrument, a baseline, and a responsible team member.
The project will measure whether the assistant succeeds at the core job. Target: at least 85 percent of patron queries return at least one relevant result. Instrument: anonymized query-outcome logs (query text is not stored; only outcome flags and relevance classification produced by the assistant itself). Patron satisfaction target: at least 4.0 on a 5-point scale from optional post-session micro-surveys. Baseline: Phase 1 catalog failure rates captured from reference desk interaction logs during the ninety-day pre-deployment window.
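A minimal sketch of the record shape this logging design implies, written in Python. The field names and the sample values are illustrative assumptions, not the production Scout schema; the point is what the record deliberately omits.

```python
# Hypothetical sketch of an anonymized query-outcome record: outcome flags
# only, with no query text, no patron identity, and no IP address.
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class QueryOutcome:
    """One logged outcome. Note what is absent: no query text,
    no patron identity, no IP address, no reading history."""
    week_of: date          # coarse timestamp only, never a precise time
    branch: str            # branch code, not a terminal or session ID
    got_result: bool       # did the assistant return at least one result?
    judged_relevant: bool  # the assistant's own relevance classification
    handed_off: bool       # escalated to a reference librarian?

def success_rate(outcomes: list[QueryOutcome]) -> float:
    """Share of queries returning at least one relevant result
    (measured against the 85 percent target)."""
    if not outcomes:
        return 0.0
    hits = sum(1 for o in outcomes if o.got_result and o.judged_relevant)
    return hits / len(outcomes)

sample = [
    QueryOutcome(date(2027, 3, 1), "branch_a", True, True, False),
    QueryOutcome(date(2027, 3, 1), "branch_b", True, False, True),
]
print(success_rate(sample))  # 0.5
```

Because the record never contains query text, even a full export of the outcome log cannot reconstruct what any patron searched for.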
Target: a measurable increase in digital collection circulation (OverDrive and Hoopla checkouts) and in database session counts after full deployment, compared with the ninety-day baseline. Instrument: vendor-provided usage reports that Tredyffrin already receives. The project will also track cross-format discovery rate, the percentage of assistant responses that surface materials outside the physical catalog, with a target of at least 40 percent.
Target: measurable usage in non-English languages, proportional to the community's language profile, and measurable use of accessibility features. Instrument: aggregate session-level signals (language of query, voice input active, screen reader active). No individual patron identity is captured.
Target: a 20 to 30 percent reduction in routine directional and discovery reference questions at the Tredyffrin reference desk during Phase 3, measured against the Phase 1 baseline. A corresponding qualitative target: staff report that freed time is being redirected to programming, collection development, outreach, and complex reference work. Instrument: the existing Tredyffrin reference statistics log and structured staff interviews at months 8 and 12.
The project will use a pre/post comparison design anchored by the ninety-day baseline. Quantitative data is collected monthly. Patron intercept surveys are administered quarterly. Staff surveys are administered twice. An external evaluator, identified during Phase 1, will review data at month 7 (midpoint) and month 13 (final), and will co-author the final evaluation report. The evaluation framework will be released under a Creative Commons license so other libraries can adopt it.
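The core arithmetic of the pre/post design can be sketched in a few lines. This is an illustrative Python fragment, not project code; the example figures are hypothetical, and the 20-to-30-percent band comes from the staff-capacity target above.

```python
# Hypothetical sketch of the pre/post comparison against the 90-day baseline.
def percent_change(baseline: float, observed: float) -> float:
    """Percent change of a Phase 3 metric relative to the Phase 1 baseline."""
    if baseline == 0:
        raise ValueError("baseline must be nonzero")
    return (observed - baseline) / baseline * 100

def within_reduction_band(baseline: float, observed: float,
                          low: float = 0.20, high: float = 0.30) -> bool:
    """Does the observed drop fall inside the 20-30 percent target band?"""
    drop = (baseline - observed) / baseline
    return low <= drop <= high

# Example: routine discovery questions falling from 400/month to 300/month
# is a 25 percent reduction, inside the target band.
print(percent_change(400, 300))        # -25.0
print(within_reduction_band(400, 300)) # True
```

The same comparison applies to every metric area: the instrument changes, but the baseline-versus-observed structure does not.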
Tredyffrin requests $100,000 for a fourteen-month Community-Centered Implementation pilot. The Community-Centered Implementation track does not require cost share at this level. Tredyffrin will contribute in-kind support, described in the Organizational Capacity section, on top of the requested amount.
| Category | Amount | Purpose and Justification |
|---|---|---|
| Discovery assistant infrastructure and language model inference | $18,000 | Language model API calls, hosting, content delivery, monitoring, and logging infrastructure for the privacy-preserving session layer. Inference is the largest direct operating expense of the assistant. |
| Librarian tester stipends | $24,000 | Twelve working librarians at $2,000 each, compensated for structured testing protocols, weekly feedback sessions, and documentation over six months. Funds flow directly to library workers, consistent with IMLS's field-leadership framing. |
| Technical partner compensation (integration and support) | $30,000 | Two developer-librarians at $15,000 each for ILS integration, OverDrive/Hoopla connector work, bug fixes, deployment, and staff support across all fourteen months. Ensures the project is delivered by practitioners, not contracted out to a vendor unfamiliar with library workflow. |
| External evaluation | $10,000 | Independent midpoint and final evaluation by an external evaluator identified during Phase 1. Covers instrument design review, structured staff interviews, data analysis, and co-authored final report. |
| Travel and dissemination | $8,000 | Registration and travel for presentations at the Pennsylvania Library Association, Public Library Association, and American Library Association annual conferences. Webinar platform fees for the free director-facing webinar series. |
| Supplies, survey tools, and accessibility testing software | $5,000 | Licensed survey platform for patron and staff surveys, accessibility testing tools (beyond the free baseline), and testing devices for staff and branch kiosks. |
| Indirect costs | $5,000 | Tredyffrin Township Libraries administrative overhead at [CONFIRM TREDYFFRIN RATE]. Defaults to the federal de minimis rate of 15% of modified total direct costs if no negotiated rate exists. |
| Total requested | $100,000 | |
Approximately 54 percent of the total request ($54,000 of $100,000) is paid directly to library workers (librarian testers plus the partner team, both of whom are credentialed librarians). The project team considers this a feature, not a side effect. IMLS funds should reach the people doing the work.
| Month | Phase | Key Activities | Deliverable |
|---|---|---|---|
| 1 to 3 | Phase 1 | ILS integration, connector configuration, baseline data capture, staff orientation, tester recruitment | Integration readiness memo; baseline report |
| 4 to 6 | Phase 2 | Staff-only soft launch, structured testing, weekly iteration, one to two branch rollout | Interim evaluation report; rollout decision memo |
| 7 to 12 | Phase 3 | Full public deployment, monthly analytics, quarterly patron surveys, month 7 external midpoint review | Midpoint external evaluation; staff survey results |
| 10 to 14 | Phase 4 | Drafting replication guide, conference presentations, webinar series, final report | Replication guide; final evaluation report; IMLS final report |
Tredyffrin Township Libraries serves the residents of Tredyffrin Township and surrounding communities in Chester County, Pennsylvania, through [CONFIRM: two branches, service population, annual circulation, annual operating budget]. The library is a member of the Chester County Library System, a cooperative of eighteen member libraries, and operates under the Pennsylvania Library Code with annual reporting to the Pennsylvania Office of Commonwealth Libraries. Tredyffrin has an established record of technology adoption and of collaborative work with neighboring libraries on shared services, professional development, and disaster response. The library currently employs [CONFIRM FTE COUNT] staff and offers programming and reference services in person and online. Tredyffrin will provide in-kind staff time, access to its integrated library system and digital vendor accounts, patron feedback infrastructure, and board-level institutional commitment for the duration of the pilot.
Search with Scout is a practitioner-built conversational discovery assistant for public libraries. The project was founded by two working librarians who identified the discovery gap through daily reference shifts and built the assistant from inside the profession. Scout's architecture is privacy-first, accessibility-first, and vendor-agnostic across integrated library systems and digital collection providers. A working demonstration is live for reviewer evaluation. The team contributes directly to the pilot as the technical partner under this grant and is committed to the open-source release of the evaluation framework and replication guide produced by the project.
Mallory Hoffman, Director of Tredyffrin Township Libraries. Mallory brings more than twenty years of public library leadership experience, including prior service as Executive Director of another Pennsylvania public library. She holds an MLIS from Kutztown University. As institutional lead, Mallory will chair the project steering committee, sign off on staff time allocations, and review major deliverables.
Jonathan Trice, Head of Reference and Adult Services, Tredyffrin Township Libraries. Jonathan holds an MLIS from Drexel University and leads Tredyffrin's reference and adult services team. Jonathan brings frontline expertise in patron discovery, prior grant writing experience, and direct authority to coordinate staff testing and feedback. Jonathan is the day-to-day grant lead for this project.
Drew Garraway, Reference Librarian, Tredyffrin Township Libraries, and co-founder, Search with Scout. Drew works the Tredyffrin reference desk, where the discovery problem presented itself as a daily frustration. Drew is responsible for the assistant's conversation design, librarian-facing onboarding, and library community engagement during the pilot.
Kenny Allen, co-founder, Search with Scout. Kenny is a former librarian turned product designer and brings both library domain expertise and technology product development experience. Kenny is responsible for product and user experience design, technical architecture, and dissemination materials.
The pilot is designed so that successful deployment at Tredyffrin creates a self-sustaining path forward without requiring future IMLS funding for continued operation at the pilot site.
Operational sustainability at Tredyffrin. After the pilot period, Tredyffrin can continue operating the assistant through a standard library technology subscription sized to its service population, which is comparable to existing library technology line items. The pilot provides the evidence Tredyffrin's board needs to evaluate the subscription against measured outcomes rather than a vendor promise.
Replication at other libraries. The replication guide produced in Phase 4 is the project's main sustainability contribution to the field. It captures everything another library needs to evaluate, plan, and deploy a conversational discovery assistant: technical prerequisites, change management recommendations, evaluation instruments, staff training notes, and common pitfalls. The guide is released under a Creative Commons license and hosted through IMLS-recognized channels (IMLS project pages, the American Library Association, and state library association repositories).
Open evaluation framework. The evaluation framework developed during this project (survey instruments, measurement definitions, dashboards, and the open-source analytics notebook) will be released as a standalone toolkit. Libraries considering any conversational discovery tool, not only Search with Scout, can use this framework to evaluate vendor claims against their own patron population.
Network effects across libraries. Because the assistant's architecture is vendor-agnostic, deployments at additional libraries contribute back to shared improvements in connector coverage, accessibility, and multilingual response quality. Tredyffrin's pilot data (aggregate and anonymized) becomes a reference point that other libraries can cite in their own planning.
Follow-on funding. A successful Community-Centered Implementation pilot creates the evidence base needed for larger federal and foundation applications that can support multi-library scaling, including a possible future National Implementation track proposal at the $300,000 level. The project team has already mapped compatible follow-on opportunities.
This project will produce the following digital products, all released under open licenses so that the field can freely adopt and adapt them: the replication guide, the public evaluation report, the open evaluation framework and analytics notebook, and recordings of the director-facing webinar series.
No patron-identifiable data, no individual query text, and no session-level content will be released in any digital product. All published metrics are aggregated at or above the weekly branch level. The open evaluation framework is designed to enforce this aggregation by default so libraries adopting it cannot accidentally over-collect.
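One way "aggregation by default" can be enforced is with a minimum cell size: metrics are rolled up to week-by-branch cells, and any cell below the threshold is suppressed before release. The sketch below is illustrative Python; the threshold of 5 is an assumption for the example, not a published Scout policy.

```python
# Illustrative sketch: roll events up to (week, branch) cells and suppress
# small cells so low-frequency groups cannot be re-identified.
from collections import Counter

MIN_CELL = 5  # assumed suppression threshold for this example

def weekly_branch_rollup(events):
    """events: iterable of (iso_week, branch) tuples. No finer-grained
    detail ever enters this pipeline, so nothing finer can leak out.
    Returns only the cells that meet the minimum count."""
    counts = Counter(events)
    return {cell: n for cell, n in counts.items() if n >= MIN_CELL}

events = ([("2027-W10", "branch_a")] * 7
          + [("2027-W10", "branch_b")] * 3)
print(weekly_branch_rollup(events))  # {('2027-W10', 'branch_a'): 7}
```

Because suppression happens inside the rollup function rather than at publication time, a library adopting the framework would have to deliberately bypass it to over-collect.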
Digital products will be deposited with the Internet Archive and with a disciplinary repository (for example, the Institute of Museum and Library Services project archive and the Humanities Commons Digital Library Federation collection) so access is durable beyond the Tredyffrin website or the Scout project site.
Full two-page biographical sketches will be attached per IMLS format requirements for each of the four team members listed above (Mallory Hoffman, Jonathan Trice, Drew Garraway, Kenny Allen). A short summary of each is included in the Organizational Capacity section. [DRAFT BIOS IN APPENDIX; CONFIRM IMLS FORMAT IN FY2027 NOFO]
| Attachment | Owner | Status |
|---|---|---|
| Biographical sketches (4 team members) | Drew / Jonathan | Draft needed |
| Letters of support (5) | Jonathan | Requests to send by August 2026 |
| Tredyffrin organizational profile and annual report | Mallory | Pending |
| Budget detail worksheet (IMLS form) | Jonathan | To complete against FY2027 NOFO form |
| Schedule of completion (Gantt format) | Drew | Draft inline; format to match NOFO |
| Digital products plan (standalone) | Kenny | Draft inline; extract to standalone |
| Data management plan | Kenny | Pending; confirm requirement in FY2027 NOFO |
| Indirect cost rate agreement or de minimis attestation | Mallory | Pending |
The project team will include a formal references section in the final narrative, covering American Library Association Library Bill of Rights (Article VII), the Public Library Association's planning documents on discovery, WCAG 2.2 Level AA accessibility guidelines, and the most recent Pennsylvania Office of Commonwealth Libraries state plan. Citations will be formatted per the FY2027 NOFO instructions once published.