Abstract

Privacy Preserving Data Analysis Technique

G.Monika, R.Saraswathi, K.Sujitha, Mrs.M.Varalakshmi

In many cases, competing parties who hold private data may collaboratively conduct fraud detection tasks to learn beneficial data models or analysis results. For example, different credit card companies may try to build better models for credit card fraud detection through such joint tasks. Similarly, competing companies in the same industry may try to combine their sales data to build models that can predict future sales. In many of these cases, the competing parties have different incentives. Although certain fraud detection techniques guarantee that nothing other than the final analysis result is revealed, it is impossible to verify whether participating parties are truthful about their private input data. In other words, unless proper incentives are set, even current fraud detection techniques cannot prevent participating parties from modifying their private inputs. This raises the question of how to design incentive-compatible fraud detection techniques that motivate participating parties to provide truthful input data. In this paper, we first develop key theorems; then, based on these theorems, we analyze which types of fraud detection tasks can be conducted in a way that telling the truth is the best choice for any participating party.
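The incentive problem described above can be illustrated with a toy sketch. The scenario below is entirely hypothetical (two parties, a jointly computed mean, and an assumed utility function that rewards a party when the published result is close to its preferred value); it shows why a naive joint computation is not incentive compatible, since a party can shift the result toward its preferred value by misreporting its private input.

```python
# Hypothetical two-party example: the joint analysis publishes the mean
# of the parties' private inputs, and each party's (assumed) utility is
# higher the closer the published result is to its preferred value.

def joint_result(inputs):
    """The jointly computed statistic (here: a simple mean)."""
    return sum(inputs) / len(inputs)

def utility(preferred, result):
    """Assumed utility: closer to the party's preferred value is better."""
    return -abs(result - preferred)

# Party A truly holds 10.0 and would prefer the published result near 10.0;
# Party B truly holds 20.0.
true_a, true_b = 10.0, 20.0

u_honest = utility(true_a, joint_result([true_a, true_b]))  # A reports truth
u_lie    = utility(true_a, joint_result([0.0, true_b]))     # A underreports

# Misreporting pulls the mean toward A's preferred value, so lying pays:
print(u_honest, u_lie)  # -5.0 0.0
assert u_lie > u_honest
```

Under these assumed payoffs, truth-telling is not a dominant strategy, which is exactly the situation the paper's incentive-compatibility analysis is meant to rule out.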

Disclaimer: This abstract was translated using artificial intelligence tools and has not yet been reviewed or verified.

Indexed In

Academic Keys
ResearchBible
CiteFactor
Cosmos IF
RefSeek
Hamdard University
World Catalogue of Scientific Journals
Scholarsteer
International Innovative Journal Impact Factor (IIJIF)
International Institute of Organized Research (I2OR)
Cosmos
