Author: Koki Saitoh (斋藤康毅)
Deep Learning from Scratch 2: Natural Language Processing (《深度学习进阶:自然语言处理》) is the sequel to Deep Learning from Scratch (《深度学习入门:基于Python的理论与实现》). Centered on natural language processing and time-series data processing, it introduces the key techniques of deep learning, including word2vec, RNN, LSTM, GRU, seq2seq, and Attention. Written in plain language, with numerous diagrams and Python code, the book follows the flow of "pose a problem", "think of a new way to solve it", "then improve it", using deep learning to tackle a range of natural language processing problems so that readers deepen their understanding of these key techniques along the way.
Text Preview (First 20 pages)
Turing Community e-books do not use a proprietary client; you can read them on any device, with the browser and PDF reader of your choice. However, the e-book you purchase is for your personal use only and may not be distributed without authorization. We choose to believe that our readers have the conscience and awareness to join us in protecting intellectual property. If a purchaser commits infringement, we may take rights-protection measures against that user, including but not limited to closing the account, and may pursue legal liability.
Turing Programming Series
Deep Learning from Scratch 2: Natural Language Processing (深度学习进阶:自然语言处理)
[Japan] Koki Saitoh (斋藤康毅), author; Lu Yujie (陆宇杰), translator
Published by Posts and Telecommunications Press, Beijing, under license from O'Reilly Japan, Inc.
Beijing・Boston・Farnham・Sebastopol・Tokyo
Synopsis

This book is the sequel to Deep Learning from Scratch (《深度学习入门:基于Python的理论与实现》). Centered on natural language processing and time-series data processing, it introduces the key techniques of deep learning, including word2vec, RNN, LSTM, GRU, seq2seq, and Attention. In plain, accessible language, with numerous diagrams and Python code, it follows the approach of "pose a problem", "think of a new way to solve it", "then improve it", using deep learning to solve a range of natural language processing problems so that readers deepen their understanding of these key techniques along the way. The book is suitable for anyone interested in natural language processing.

◆ Author: [Japan] Koki Saitoh. Translator: Lu Yujie. Executive editor: Du Xiaojing. Production editor: Zhou Shengliang.
◆ Published and distributed by Posts and Telecommunications Press, 11 Chengshousi Road, Fengtai District, Beijing 100164. Email: 315@ptpress.com.cn. Website: https://www.ptpress.com.cn. Printed in Beijing.
◆ Format: 880×1230 mm, 1/32. Printed sheets: 13.125. Color inserts: 1. Word count: 394,000 characters. First edition, October 2020; first Beijing printing, October 2020. Print run: 1-3,500 copies. Copyright contract registration number: 图字 01-2018-7738. Price: 99.00 yuan.
Reader service hotline: (010) 51095183 ext. 600. Print quality hotline: (010) 81055316. Anti-piracy hotline: (010) 81055315. Advertising license: 京东市监广登字20170147号.

Cataloging in Publication (CIP) data: 深度学习进阶:自然语言处理 / [Japan] Koki Saitoh, author; Lu Yujie, translator. Beijing: Posts and Telecommunications Press, 2020.10 (Turing Programming Series). ISBN 978-7-115-54764-4. Ⅰ. ①深… Ⅱ. ①斋… ②陆… Ⅲ. ①Natural language processing Ⅳ. ①TP391. China Version Library CIP data record no. (2020) 168652.
Copyright Notice

© Turing Book / Posts and Telecommunications Press, 2020. Authorized translation of the Japanese edition of Deep Learning from Scratch 2 © 2018 Koki Saitoh, O'Reilly Japan, Inc. This translation is published and sold by permission of O'Reilly Japan, Inc., the owner of all rights to publish and sell the same.

The Simplified Chinese edition is published by Posts and Telecommunications Press, 2020. The Japanese original was published by O'Reilly Japan, Inc., 2018. The translation of the Japanese original is authorized by O'Reilly Japan, Inc. The publication and sale of this Simplified Chinese edition are licensed by O'Reilly Japan, Inc., the owner of the publishing and sales rights.

All rights reserved. No part of this book may be reproduced, in whole or in part, in any form without written permission.
About O'Reilly Media, Inc.

O'Reilly Media spreads innovative knowledge through books, magazines, online services, research, and conferences. Since 1978, O'Reilly has been a witness to and a driver of frontier development. Super-geeks are creating the future, and we watch the technology trends that truly matter, amplifying the "faint signals" to stimulate society's adoption of new technology. As an active participant in the technology community, O'Reilly's growth has been marked by advocating, creating, and amplifying innovation.

O'Reilly brought software developers the revolutionary "animal books"; built the first commercial website (GNN); organized the influential open source summit after which the open source software movement was named; and founded Make magazine, becoming a major pioneer of the DIY revolution. The company continues, in many forms, to forge the bond between information and people. O'Reilly's conferences and summits gather super-geeks and far-sighted business leaders to sketch out the revolutionary ideas that create new industries. As the information source of choice for technologists, O'Reilly now also passes pioneering experts' knowledge on to ordinary computer users. Whether through book publishing, online services, or in-person courses, every O'Reilly product reflects the company's unshakable belief: information is the power that sparks innovation.

Industry comments

"The O'Reilly Radar blog enjoys a sterling reputation." —— Wired
"O'Reilly has built a multimillion-dollar business on a series of extraordinary ideas (how I wish I had thought of them myself)." —— Business 2.0
"The O'Reilly Conference is the absolute model for gathering key thought leaders." —— CRN
"An O'Reilly book stands for a subject that is useful, promising, and worth learning." —— Irish Times
"Tim is a maverick businessman: not only does he take the longest and broadest view, he actually follows Yogi Berra's advice: 'When you come to a fork in the road, take it.' Looking back, Tim seems to have taken the side path every time, several of them fleeting opportunities, even though the main road would also have been fine." —— Linux Journal
Contents

Translator's Preface ···· xi
Preface ···· xiii
Chapter 1 Review of Neural Networks ···· 1
  1.1 Review of math and Python ···· 1
    1.1.1 Vectors and matrices ···· 1
    1.1.2 Element-wise operations on matrices ···· 4
    1.1.3 Broadcasting ···· 4
    1.1.4 Vector inner product and matrix product ···· 6
    1.1.5 Checking matrix shapes ···· 7
  1.2 Neural network inference ···· 8
    1.2.1 The big picture of neural network inference ···· 8
    1.2.2 Implementing layers as classes and forward propagation ···· 14
  1.3 Neural network training ···· 18
    1.3.1 Loss functions ···· 18
    1.3.2 Derivatives and gradients ···· 21
    1.3.3 The chain rule ···· 23
    1.3.4 Computational graphs ···· 24
    1.3.5 Deriving gradients and implementing backpropagation ···· 35
    1.3.6 Updating weights ···· 39
  1.4 Solving a problem with a neural network ···· 41
    1.4.1 The spiral dataset ···· 41
    1.4.2 Implementing the neural network ···· 43
    1.4.3 The training code ···· 45
    1.4.4 The Trainer class ···· 49
  1.5 Speeding up computation ···· 50
    1.5.1 Bit precision ···· 51
    1.5.2 GPU (CuPy) ···· 52
  1.6 Summary ···· 54
Chapter 2 Natural Language and Distributed Representations of Words ···· 57
  2.1 What is natural language processing? ···· 57
  2.2 Thesauruses ···· 59
    2.2.1 WordNet ···· 61
    2.2.2 Problems with thesauruses ···· 61
  2.3 Count-based methods ···· 63
    2.3.1 Preprocessing a corpus in Python ···· 63
    2.3.2 Distributed representations of words ···· 66
    2.3.3 The distributional hypothesis ···· 67
    2.3.4 Co-occurrence matrices ···· 68
    2.3.5 Similarity between vectors ···· 72
    2.3.6 Ranking similar words ···· 74
  2.4 Improving count-based methods ···· 77
    2.4.1 Pointwise mutual information ···· 77
    2.4.2 Dimensionality reduction ···· 81
    2.4.3 Dimensionality reduction with SVD ···· 84
    2.4.4 The PTB dataset ···· 86
    2.4.5 Evaluation on the PTB dataset ···· 88
  2.5 Summary ···· 91
Chapter 3 word2vec ···· 93
  3.1 Inference-based methods and neural networks ···· 93
    3.1.1 Problems with count-based methods ···· 94
    3.1.2 Overview of inference-based methods ···· 95
    3.1.3 Processing words in neural networks ···· 96
  3.2 A simple word2vec ···· 101
    3.2.1 Inference with the CBOW model ···· 101
    3.2.2 Training the CBOW model ···· 106
    3.2.3 word2vec weights and distributed representations ···· 108
  3.3 Preparing the training data ···· 110
    3.3.1 Contexts and targets ···· 110
    3.3.2 Converting to one-hot representations ···· 113
  3.4 Implementing the CBOW model ···· 114
  3.5 Supplementary notes on word2vec ···· 120
    3.5.1 The CBOW model and probability ···· 121
    3.5.2 The skip-gram model ···· 122
    3.5.3 Count-based vs. inference-based ···· 125
  3.6 Summary ···· 127
Chapter 4 Speeding Up word2vec ···· 129
  4.1 Improving word2vec (1) ···· 129
    4.1.1 The Embedding layer ···· 132
    4.1.2 Implementing the Embedding layer ···· 133
  4.2 Improving word2vec (2) ···· 137
    4.2.1 The computation bottleneck after the hidden layer ···· 138
    4.2.2 From multi-class classification to binary classification ···· 139
    4.2.3 The sigmoid function and cross-entropy error ···· 141
    4.2.4 Implementing the multi-class-to-binary conversion ···· 144
    4.2.5 Negative sampling ···· 148
    4.2.6 The sampling method of negative sampling ···· 151
    4.2.7 Implementing negative sampling ···· 154
  4.3 Training the improved word2vec ···· 156
    4.3.1 Implementing the CBOW model ···· 156
    4.3.2 Training code for the CBOW model ···· 159
    4.3.3 Evaluating the CBOW model ···· 161
  4.4 Other word2vec topics ···· 165
    4.4.1 Applications of word2vec ···· 166
    4.4.2 Evaluating word vectors ···· 168
  4.5 Summary ···· 170
Chapter 5 RNN ···· 173
  5.1 Probability and language models ···· 173
    5.1.1 word2vec from a probabilistic viewpoint ···· 174
    5.1.2 Language models ···· 176
    5.1.3 The CBOW model as a language model? ···· 178
  5.2 RNN ···· 181
    5.2.1 Recurrent neural networks ···· 181
    5.2.2 Unrolling the loop ···· 183
    5.2.3 Backpropagation Through Time ···· 185
    5.2.4 Truncated BPTT ···· 186
    5.2.5 Mini-batch learning with Truncated BPTT ···· 190
  5.3 Implementing an RNN ···· 192
    5.3.1 Implementing the RNN layer ···· 193
    5.3.2 Implementing the Time RNN layer ···· 197
  5.4 Implementing layers for time-series data ···· 202
    5.4.1 The big picture of the RNNLM ···· 202
    5.4.2 Implementing the Time layers ···· 205
  5.5 Training and evaluating the RNNLM ···· 207
    5.5.1 Implementing the RNNLM ···· 207
    5.5.2 Evaluating language models ···· 211
    5.5.3 Training code for the RNNLM ···· 213
    5.5.4 The Trainer class for the RNNLM ···· 216
  5.6 Summary ···· 217
Chapter 6 Gated RNN ···· 219
  6.1 Problems with RNN ···· 220
    6.1.1 Review of RNN ···· 220
    6.1.2 Vanishing and exploding gradients ···· 221
    6.1.3 Causes of vanishing and exploding gradients ···· 223
    6.1.4 Countermeasures against exploding gradients ···· 228
  6.2 Vanishing gradients and LSTM ···· 229
    6.2.1 The LSTM interface ···· 230
    6.2.2 Structure of the LSTM layer ···· 231
    6.2.3 The output gate ···· 234
    6.2.4 The forget gate ···· 236
    6.2.5 The new memory cell ···· 237
    6.2.6 The input gate ···· 238
    6.2.7 Gradient flow in the LSTM ···· 239
  6.3 Implementing the LSTM ···· 240
  6.4 A language model using LSTM ···· 248
  6.5 Further improving the RNNLM ···· 255
    6.5.1 Stacking LSTM layers ···· 256
    6.5.2 Suppressing overfitting with Dropout ···· 257
    6.5.3 Weight tying ···· 262
    6.5.4 Implementing a better RNNLM ···· 263
    6.5.5 Frontier research ···· 269
  6.6 Summary ···· 270
Chapter 7 Text Generation with RNN ···· 273
  7.1 Generating text with a language model ···· 274
    7.1.1 The steps of text generation with an RNN ···· 274
    7.1.2 Implementing text generation ···· 278
    7.1.3 Better text generation ···· 281
  7.2 The seq2seq model ···· 283
    7.2.1 How seq2seq works ···· 283
    7.2.2 A first attempt at converting time-series data ···· 287
    7.2.3 Variable-length time-series data ···· 288
    7.2.4 The addition dataset ···· 290
  7.3 Implementing seq2seq ···· 291
    7.3.1 The Encoder class ···· 291
    7.3.2 The Decoder class ···· 295
    7.3.3 The Seq2seq class ···· 300
    7.3.4 Evaluating seq2seq ···· 301
  7.4 Improving seq2seq ···· 305
    7.4.1 Reversing the input data (Reverse) ···· 305
    7.4.2 Peeking (Peeky) ···· 308
  7.5 Applications of seq2seq ···· 313
    7.5.1 Chatbots ···· 314
    7.5.2 Learning algorithms ···· 315
    7.5.3 Automatic image captioning ···· 316
  7.6 Summary ···· 318
Chapter 8 Attention ···· 321
  8.1 The structure of Attention ···· 321
    8.1.1 Problems with seq2seq ···· 322
    8.1.2 Improving the encoder ···· 323
    8.1.3 Improving the decoder (1) ···· 325
    8.1.4 Improving the decoder (2) ···· 333
    8.1.5 Improving the decoder (3) ···· 339
  8.2 Implementing seq2seq with Attention ···· 344
    8.2.1 Implementing the encoder ···· 344
    8.2.2 Implementing the decoder ···· 345
    8.2.3 Implementing the seq2seq ···· 347
  8.3 Evaluating Attention ···· 347
    8.3.1 The date format conversion problem ···· 348
    8.3.2 Training the seq2seq with Attention ···· 349
    8.3.3 Visualizing Attention ···· 353
  8.4 Other topics on Attention ···· 356
    8.4.1 Bidirectional RNN ···· 356
    8.4.2 Ways to use Attention layers ···· 358
    8.4.3 Deepening seq2seq and skip connections ···· 360
  8.5 Applications of Attention ···· 363
    8.5.1 GNMT ···· 363
    8.5.2 Transformer ···· 365
    8.5.3 NTM ···· 369
  8.6 Summary ···· 373
Appendix A Derivatives of the sigmoid and tanh Functions ···· 375
  A.1 The sigmoid function ···· 375
  A.2 The tanh function ···· 378
  A.3 Summary ···· 380
Appendix B Running WordNet ···· 381
  B.1 Installing NLTK ···· 381
  B.2 Getting synonyms with WordNet ···· 382
  B.3 WordNet and word networks ···· 384
  B.4 Semantic similarity with WordNet ···· 385
Appendix C GRU ···· 387
  C.1 The GRU interface ···· 387
  C.2 The computational graph of GRU ···· 388
Afterword ···· 391
References ···· 395
Translator's Preface

Not long after I finished translating Deep Learning from Scratch (《深度学习入门:基于Python的理论与实现》), its prolific author, Koki Saitoh, published this sequel, devoted to how deep learning is applied in natural language processing. Opening the book's preface, Richard Feynman's words, "What I cannot create, I do not understand," leap off the page, a line I also quoted in my translator's preface to the previous book. It seems the author and I share the same conviction: if you truly want to understand something, you must practice it yourself. Perhaps this is a kind of fate, so I decided to carry on and translate the sequel as well.

This book continues the philosophy of its predecessor but turns to a different application area: the previous book centered on convolutional neural networks and image recognition, while this one focuses on recurrent neural networks and natural language processing. It explains in detail word vectors, LSTM, seq2seq, Attention, and other deep learning techniques important to natural language processing.

Of course, natural language processing is a broad research field, involving concepts such as syntax, semantics, and context, with many research branches. Besides the deep learning paradigm, there are also paradigms based on linguistics, on rules, and on machine learning. The topics covered in this book (word meaning, language models, text generation) are only a small part of the field. Readers who want a fuller picture of natural language processing will need to consult further material.

After the previous book came out, many readers reported on review sites or in the Turing Community that the translation read well. In translating that book, however, I sometimes pursued fidelity so strictly that the flow of certain sentences suffered, and a few passages even read a little awkwardly.
For this reason, while keeping the meaning of the original intact, the translation of this book makes greater use of phrasing that follows natural Chinese usage, in the hope of giving readers a smoother reading experience.

The translation was done by me alone, and I especially thank the editors at Turing for reviewing the entire book. Finally, given the translator's limited ability, errors and omissions are inevitable. Readers are welcome to point out any problems through the Turing Community, so that we can correct them when the book is reprinted.

Lu Yujie
Shanghai, February 2020
Preface

What I cannot create, I do not understand.
—— Richard Feynman

Deep learning is profoundly changing the world. Without deep learning, speech recognition on smartphones, real-time translation on the Web, and exchange-rate forecasting would all be out of the question. Thanks to deep learning, new drug development, patient diagnosis, and self-driving cars are gradually becoming reality. Beyond these, almost every advanced technology has deep learning behind it. From here on, the world will advance even further because of deep learning.

This book is the sequel to Deep Learning from Scratch (《深度学习入门:基于Python的理论与实现》); building on it, we discuss further deep learning techniques. In particular, this book concentrates on natural language processing and time-series data processing, using deep learning to take on a variety of tasks. And, inheriting the "build from scratch" philosophy of its predecessor, it lets readers experience these advanced technologies for themselves.

The philosophy of this book

The author believes that to understand deep learning (or any advanced technology) deeply, the experience of "building from scratch" is essential. Building from scratch means starting from what you can understand and implementing the target technology while using as few ready-made external components as possible. The goal of this book is to genuinely master deep learning through that process, rather than stopping at the surface.
After all, to understand a technology deeply, you should at least possess the knowledge and skills needed to create it. Since this book builds deep learning from scratch, we will write a great deal of code and run many experiments. The process takes time, and sometimes considerable thought, but this time-consuming work (indeed, the work itself) contains much of the essence needed for a deep understanding of the technology. Knowledge gained this way is a great help when using existing libraries, reading cutting-edge papers, or building original systems. Above all, working step by step through the structure and principles of deep learning is interesting in its own right.

Entering the world of natural language processing

The theme of this book is natural language processing with deep learning. Put simply, natural language processing is the technology that lets computers understand the language we speak every day. Getting computers to understand our language is extremely difficult, and at the same time extremely important. In fact, natural language processing has already changed our lives dramatically: web search, machine translation, and voice assistants, technologies that have had a major impact on the world, all rely on natural language processing under the hood.

Our lives, then, are inseparable from natural language processing, and deep learning occupies a very important place in this field. In fact, deep learning has dramatically improved on the performance of traditional natural language processing; the quality of Google's machine translation, for example, has risen markedly thanks to deep learning.

This book introduces the important techniques of deep learning around natural language processing and time-series data processing, specifically word2vec, RNN, LSTM, GRU, seq2seq, and Attention. We explain these techniques in language as plain as possible and help readers deepen their understanding by actually building them. Through experiments, we will also get a real feel for what they can do.

This book explores natural language processing from the viewpoint of deep learning. It has eight chapters, and readers are advised to read them in order from the beginning, like installments of a serialized story. Whenever we run into a problem, we first find a way to solve it, and then improve on that solution. Following this flow, and with deep learning as our weapon, we tackle a variety of natural language processing problems. Through this expedition, I hope readers will come to understand the important techniques of deep learning deeply, and to find them genuinely interesting.
Who this book is for

This book is the sequel to Deep Learning from Scratch (《深度学习入门:基于Python的理论与实现》), so it assumes that readers have studied the previous book. As a refresher, however, Chapter 1 of this book reviews neural networks. Even readers who have not read the previous book can therefore read this one, provided they have some knowledge of neural networks and Python.

To help readers understand deep learning deeply, this book carries on its predecessor's philosophy of centering everything on "building" and "running". We take a firm stance of not using anything we do not understand, using only what we do understand, as we explore the worlds of deep learning and natural language processing.

To make the intended audience clear, the book's content and features are listed below.

• Implements deep learning programs from scratch, without relying on external libraries
• As the sequel to Deep Learning from Scratch, explains the deep learning techniques used in natural language processing and time-series data processing
• Provides runnable Python source code so that readers can experiment easily
• Uses language as plain as possible, with clear illustrations
• Uses mathematical expressions where needed, but places more weight on explanation through source code
• Emphasizes principles, asking questions such as "Why is this method better?", "Why does this work?", and "Why is this a problem?"

The techniques you can learn from this book are listed below.

• Text processing in Python
• Pre-deep-learning methods for representing "words"
• word2vec for obtaining word vectors (the CBOW and skip-gram models)
• Negative Sampling for speeding up training on large datasets
• RNN, LSTM, and GRU for processing time-series data
• Backpropagation Through Time, the error backpropagation method for time-series data
• Neural networks that generate text
• seq2seq, which converts one time series into another
• Attention, which focuses on the important information

This book explains these techniques in plain, accessible detail, so that readers can master them at the implementation level. In explaining them, it does not merely list facts, but unfolds the material like a serialized story.

Who this book is not for

It is just as important to be clear about whom this book does not suit. To that end, the topics the book does not cover are listed below.

• It does not introduce the latest research developments in deep learning
• It does not discuss how to use deep learning frameworks such as Caffe, TensorFlow, or Chainer
• It does not give detailed theoretical explanations of deep learning
• It does not cover topics such as image recognition, speech recognition, or reinforcement learning (the book focuses mainly on natural language processing)

As noted above, the book does not treat the latest research or the fine points of theory. After finishing it, however, readers should be well equipped to go on to the latest papers and the frontier of natural language processing.

Operating environment

This book provides source code in Python 3, which readers can run for themselves. Reading code while thinking it through, and trying out your own new ideas, helps consolidate what you learn. The source code used in this book can be downloaded from:
https://www.ituring.com.cn/book/2678 (please use the "Downloads" area on the right side of that page; links related to the book's content can be found under "Related articles" at https://www.ituring.com.cn/article/510370. Editor's note)

The goal of this book is to implement deep learning from scratch. Our policy is therefore to avoid external libraries as far as possible, with two exceptions: NumPy and Matplotlib. With these two libraries, we can implement deep learning efficiently.

NumPy is a library for numerical computation. It provides many convenient methods for handling advanced mathematical algorithms and arrays (matrices). In this book's deep learning implementations, we use these methods to implement things efficiently.

Matplotlib is a library for plotting. With Matplotlib, experimental results can be visualized, so that the progress of training can be checked at a glance. This book uses these two libraries to implement its deep learning algorithms.

In addition, most of the source code in this book runs on an ordinary computer without taking too much time. Some of it, however (especially the training of large neural networks), takes a long time. To speed up these time-consuming parts, the book also provides code (a mechanism) that runs on a GPU, implemented with a library called CuPy (CuPy is introduced in Chapter 1). If you have a machine with an NVIDIA GPU, installing CuPy lets you run parts of the book's code at high speed on the GPU.

This book uses the following programming language and libraries.

• Python 3
• NumPy
• Matplotlib
• CuPy (optional)
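To make the roles of the two libraries concrete, here is a minimal, illustrative sketch (not taken from the book's source code; the array shapes and the mock loss values are made up for illustration). NumPy does the array math, and Matplotlib plots the kind of loss curve used to monitor training.

```python
import numpy as np
import matplotlib.pyplot as plt

# NumPy: vectorized array (matrix) math, e.g. a tiny matrix product.
W = np.random.randn(2, 3)        # a 2x3 weight matrix
x = np.array([1.0, 2.0])         # an input vector of length 2
h = np.dot(x, W)                 # matrix product -> vector of length 3
print(h.shape)                   # (3,)

# Matplotlib: visualize a mock "loss curve" to watch training progress.
iterations = np.arange(100)
loss = np.exp(-iterations / 30.0) + 0.05 * np.random.rand(100)
plt.plot(iterations, loss)
plt.xlabel('iterations')
plt.ylabel('loss')
plt.show()
```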
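As for the GPU option, the sketch below shows one common CuPy usage pattern; it is an assumption about typical usage, not the book's actual switching mechanism. Because CuPy mirrors the NumPy API, the same array code can be pointed at either library.

```python
import numpy as np

try:
    import cupy as cp            # needs an NVIDIA GPU and the CuPy package
    xp = cp                      # xp points at the GPU array module
except ImportError:
    xp = np                      # fall back to NumPy on the CPU

# The same code runs on either backend because CuPy mirrors NumPy's API.
a = xp.random.randn(1000, 1000)
b = xp.random.randn(1000, 1000)
c = xp.dot(a, b)                 # large matrix product; fast on a GPU

if xp is not np:
    c = cp.asnumpy(c)            # copy the result back to host memory
print(c.shape)                   # (1000, 1000)
```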