Academic Information

Lecture Announcement | Luojia Economics and Management Innovation Forum, No. 138: Economics Forum

Published: 2025-11-05

Title: Designing Detection Algorithms for AI-Generated Content: Consumer Inference, Creator Incentives, and Platform Strategy

Speaker: 柯特, CUHK Business School, The Chinese University of Hong Kong

Time: 10:00, November 13, 2025

Venue: Room 231, School Building

Abstract:

Generative AI has transformed content creation, enhancing efficiency and scalability across media platforms. However, it also introduces substantial risks, particularly the spread of misinformation that can undermine consumer trust and platform credibility. In response, platforms deploy detection algorithms to distinguish AI-generated from human-created content, but these systems face inherent trade-offs: aggressive detection lowers false negatives (failing to detect AI-generated content) but raises false positives (misclassifying human-created content), discouraging truthful creators. Conversely, conservative detection protects creators but weakens the informational value of labels, eroding consumer trust. We develop a model in which a platform sets the detection threshold, consumers infer credibility from labels when deciding whether to engage, and creators choose whether to adopt AI and how much effort to exert to create content. A central insight is that equilibrium structure shifts across regimes as the threshold changes. At low thresholds, consumers trust human labels and partially engage with AI-labeled content, disciplining AI misuse and boosting engagement. At high thresholds, this inference breaks down, AI adoption rises, and both trust and engagement collapse. Thus, the platform’s optimal detection strategy balances these forces, choosing a threshold that preserves label credibility while aligning creator incentives with consumer trust. Our analysis shows how detection policy shapes content creation, consumer inference, and overall welfare in two-sided content markets.
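The false negative/false positive trade-off described in the abstract can be illustrated with a small numerical sketch. The snippet below is not the paper's model; it simply assumes the platform's detector assigns each item a score, with AI-generated content drawn from N(1, 1) and human-created content from N(0, 1) (both distributions are assumptions made only for this illustration), and sweeps the labeling threshold to show how a more aggressive (lower) threshold reduces false negatives while raising false positives.

```python
import numpy as np
from scipy.stats import norm

# Illustration only: assume the detector assigns each item a score, with
# AI-generated content scoring ~ N(1, 1) and human-created content ~ N(0, 1).
# Content scoring at or above the threshold is labeled "AI-generated".
def error_rates(threshold, mu_ai=1.0, mu_human=0.0, sigma=1.0):
    fn = norm.cdf(threshold, loc=mu_ai, scale=sigma)            # AI content not flagged
    fp = 1.0 - norm.cdf(threshold, loc=mu_human, scale=sigma)   # human content flagged
    return fn, fp

# A lower (more aggressive) threshold flags more content: fewer false negatives,
# more false positives; a higher (more conservative) threshold does the reverse.
for t in np.linspace(-1.0, 2.0, 7):
    fn, fp = error_rates(t)
    print(f"threshold={t:+.2f}  false_negative_rate={fn:.2f}  false_positive_rate={fp:.2f}")
```

In the paper's setting, this mechanical trade-off is only the starting point: consumers and creators respond to the resulting label accuracy, which is what drives the regime shifts and the platform's optimal threshold choice described above.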


About the Speaker:

柯特 is Professor of Marketing and Chair of the Department of Marketing at the CUHK Business School, The Chinese University of Hong Kong. He received a Ph.D. in Operations Research and master's degrees in Statistics and in Economics from the University of California, Berkeley, and bachelor's degrees in Physics and in Statistics from Peking University. His research spans quantitative marketing models, microeconomic theory, and industrial organization, with recent work focusing on consumer search, online advertising and platforms, and the economics of privacy, data, and algorithms. Before joining CUHK, he was an assistant professor at the MIT Sloan School of Management for five years. He currently serves as an Associate Editor of Marketing Science, Management Science, Journal of Marketing Research, and Quantitative Marketing and Economics. His research on the digital economy received funding in 2024 from a young-talent program of the National Natural Science Foundation of China, and he has participated as an invited expert in the symposium on the development strategy and 15th Five-Year Plan of the business administration discipline.

