Several domains of interest, including social network analysis, biology, vision, NLP, and information extraction (IE), need to represent underlying relational structure as well as model uncertainty. Statistical relational models such as Markov logic achieve this by combining the power of relational representations (e.g., first-order logic) with statistical models (e.g., Markov networks). To perform reasoning in these models efficiently, lifted inference exploits the underlying symmetry by grouping together objects (or states) that behave similarly to each other and hence have the same associated probabilities. In this talk, starting with background on Markov logic, we will look at the novel idea of lifting using contextual symmetries, i.e., symmetries that hold only under specific assignments to a subset of variables. We will formally define contextual symmetries and then describe how they can be used to lift a general-purpose MCMC sampler for graphical models. We will present our evaluation on two representative domains, followed by directions for future work. The work presented here was published at IJCAI 2016 and won the best paper award at StaR AI-2016.
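To make the notion of a contextual symmetry concrete, here is a minimal sketch (not from the talk itself; the toy model, the `potential` function, and all variable names are invented for illustration). It builds a tiny factor over three binary variables in which swapping X1 and X2 preserves the unnormalized probability only in the context C = 1:

```python
import itertools

# Hypothetical toy model over binary variables C, X1, X2.
# The potential is symmetric in (X1, X2) only when C == 1.
def potential(c, x1, x2):
    if c == 1:
        return 2.0 ** (x1 + x2)       # symmetric in (x1, x2)
    return 3.0 ** x1 * 1.5 ** x2      # asymmetric in (x1, x2)

# Check whether swapping X1 and X2 preserves the weight in each context.
# The permutation (X1 X2) is a contextual symmetry under C = 1 only.
for c in (0, 1):
    symmetric = all(
        potential(c, x1, x2) == potential(c, x2, x1)
        for x1, x2 in itertools.product((0, 1), repeat=2)
    )
    print(f"C={c}: swap(X1, X2) is a symmetry -> {symmetric}")
```

Running this prints that the swap is a symmetry for C=1 but not for C=0, which is exactly the situation a purely variable-level symmetry detector would miss but a contextual one can exploit.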