Lecture Announcement | Luojia Economics and Management Innovation Forum, No. 138: Economics Forum
2025-11-13
Posted: 2025-11-05

Lecture Title: Designing Detection Algorithms for AI-Generated Content: Consumer Inference, Creator Incentives, and Platform Strategy

Speaker: Te Ke (柯特), CUHK Business School

Time: 10:00, November 13, 2025

Venue: Room 231, School Building

Abstract:

Generative AI has transformed content creation, enhancing efficiency and scalability across media platforms. However, it also introduces substantial risks, particularly the spread of misinformation that can undermine consumer trust and platform credibility. In response, platforms deploy detection algorithms to distinguish AI-generated from human-created content, but these systems face inherent trade-offs: aggressive detection lowers false negatives (failing to detect AI-generated content) but raises false positives (misclassifying human-created content), discouraging truthful creators. Conversely, conservative detection protects creators but weakens the informational value of labels, eroding consumer trust. We develop a model in which a platform sets the detection threshold, consumers infer credibility from labels when deciding whether to engage, and creators choose whether to adopt AI and how much effort to exert to create content. A central insight is that equilibrium structure shifts across regimes as the threshold changes. At low thresholds, consumers trust human labels and partially engage with AI-labeled content, disciplining AI misuse and boosting engagement. At high thresholds, this inference breaks down, AI adoption rises, and both trust and engagement collapse. Thus, the platform’s optimal detection strategy balances these forces, choosing a threshold that preserves label credibility while aligning creator incentives with consumer trust. Our analysis shows how detection policy shapes content creation, consumer inference, and overall welfare in two-sided content markets.
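The detection trade-off described above can be illustrated with a small simulation. This is a hedged sketch, not part of the talk: it assumes detector scores for human- and AI-created content follow two overlapping normal distributions (means and spreads chosen arbitrarily), and shows how raising the labeling threshold lowers the false-positive rate on human content while raising the false-negative rate on AI content.

```python
import random

random.seed(0)

# Hypothetical detector scores: higher means "more likely AI-generated".
# The distributions below are illustrative assumptions, not estimates.
human_scores = [random.gauss(0.0, 1.0) for _ in range(10_000)]
ai_scores = [random.gauss(1.5, 1.0) for _ in range(10_000)]

def error_rates(threshold):
    """Content scoring above `threshold` is labeled AI-generated."""
    # False positive: human content misclassified as AI-generated.
    fpr = sum(s > threshold for s in human_scores) / len(human_scores)
    # False negative: AI-generated content that escapes detection.
    fnr = sum(s <= threshold for s in ai_scores) / len(ai_scores)
    return fpr, fnr

for t in (0.0, 0.75, 1.5):
    fpr, fnr = error_rates(t)
    print(f"threshold={t:4.2f}  FPR={fpr:.3f}  FNR={fnr:.3f}")
```

Sweeping the threshold makes the tension concrete: an aggressive (low) threshold catches nearly all AI content but mislabels many human creators, while a conservative (high) threshold does the reverse, which is exactly the force the platform's optimal policy must balance.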


About the Speaker:

Te Ke is Professor of Marketing and Department Chair at CUHK Business School. He received his Ph.D. in Operations Research, M.A. in Statistics, and M.A. in Economics from UC Berkeley, and bachelor's degrees in Physics and Statistics from Peking University. His research spans quantitative marketing models, microeconomic theory, and industrial organization; his recent work focuses on consumer search, online advertising and platforms, and the economics of privacy, data, and algorithms. Before joining CUHK, he was an assistant professor at the MIT Sloan School of Management for five years. He currently serves as an associate editor of Marketing Science, Management Science, Journal of Marketing Research, and Quantitative Marketing and Economics. His research on the digital economy received a Young Talent Program grant from the National Natural Science Foundation of China in 2024, and he participated as an invited expert in the workshop on the development strategy and 15th Five-Year Plan of the business administration discipline.
