TY - JOUR
T1 - Mobile Crowdsensing Games in Vehicular Networks
AU - Xiao, Liang
AU - Chen, Tianhua
AU - Xie, Caixia
AU - Dai, Huaiyu
AU - Poor, H. Vincent
N1 - Funding Information:
Manuscript received March 7, 2016; revised July 17, 2016 and September 21, 2016; accepted December 27, 2016. Date of publication January 4, 2017; date of current version February 12, 2018. This work was supported in part by the National Natural Science Foundation of China under Grant 61671396, Grant 61271242, and Grant 91638204; in part by the U.S. National Science Foundation under Grant ECCS-1307949, Grant EARS-1444009, and Grant ECCS-1343210; in part by the U.S. Army Research Office under Grant W911NF-16-1-0448; and in part by the CCF-Venustech Hongyan Research Initiative (2016-010). The review of this paper was coordinated by Yuguang Fang.
Publisher Copyright:
© 1967-2012 IEEE.
PY - 2018/2
Y1 - 2018/2
N2 - Vehicular crowdsensing takes advantage of the mobility of vehicles to provide location-based services in large-scale areas. In this paper, mobile crowdsensing (MCS) in vehicular networks is analyzed, and the interactions between a crowdsensing server and vehicles equipped with sensors in an area of interest are formulated as a vehicular crowdsensing game. Each participating vehicle chooses its sensing strategy based on the sensing cost, radio channel state, and the expected payment. The MCS server evaluates the accuracy of each sensing report and pays the vehicle accordingly. The Nash equilibrium of the static vehicular crowdsensing game is derived for both accumulative sensing tasks and best-quality sensing tasks, showing the tradeoff between the sensing accuracy and the overall payment by the MCS server. Q-learning-based MCS payment and sensing strategies are proposed for the dynamic vehicular crowdsensing game, and a postdecision state learning technique is applied to exploit the known radio channel model to accelerate the learning speed of each vehicle. Simulations based on a Markov-chain channel model are performed to verify the efficiency of the proposed MCS system, showing that it outperforms the benchmark MCS system in terms of the average utility, the sensing accuracy, and the energy consumption of the vehicles.
AB - Vehicular crowdsensing takes advantage of the mobility of vehicles to provide location-based services in large-scale areas. In this paper, mobile crowdsensing (MCS) in vehicular networks is analyzed, and the interactions between a crowdsensing server and vehicles equipped with sensors in an area of interest are formulated as a vehicular crowdsensing game. Each participating vehicle chooses its sensing strategy based on the sensing cost, radio channel state, and the expected payment. The MCS server evaluates the accuracy of each sensing report and pays the vehicle accordingly. The Nash equilibrium of the static vehicular crowdsensing game is derived for both accumulative sensing tasks and best-quality sensing tasks, showing the tradeoff between the sensing accuracy and the overall payment by the MCS server. Q-learning-based MCS payment and sensing strategies are proposed for the dynamic vehicular crowdsensing game, and a postdecision state learning technique is applied to exploit the known radio channel model to accelerate the learning speed of each vehicle. Simulations based on a Markov-chain channel model are performed to verify the efficiency of the proposed MCS system, showing that it outperforms the benchmark MCS system in terms of the average utility, the sensing accuracy, and the energy consumption of the vehicles.
KW - Game theory
KW - mobile crowdsensing (MCS)
KW - reinforcement learning
KW - vehicular networks
UR - http://www.scopus.com/inward/record.url?scp=85042517458&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85042517458&partnerID=8YFLogxK
U2 - 10.1109/TVT.2016.2647624
DO - 10.1109/TVT.2016.2647624
M3 - Article
AN - SCOPUS:85042517458
SN - 0018-9545
VL - 67
SP - 1535
EP - 1545
JO - IEEE Transactions on Vehicular Technology
JF - IEEE Transactions on Vehicular Technology
IS - 2
ER -