Explainable AI~(XAI) methods such as SHAP can help discover feature
attributions in black-box models. If such a method reveals a significant
attribution of a ``protected feature'' (e.g., gender, race) to the model
output, the model is considered unfair. However, adversarial attacks can
subvert detection by XAI methods. Previous approaches to constructing such
adversarial models require access to the underlying data distribution, which
may not be feasible in many practical scenarios. We relax this constraint and
propose a novel family of attacks, called shuffling attacks, that are
data-agnostic. The proposed attack strategies can adapt any trained machine
learning model to fool Shapley value-based explanations. We prove that Shapley
exact Shapley values cannot detect shuffling attacks. However, algorithms that
estimate Shapley values, such as linear SHAP and kernel SHAP, can detect these
attacks with varying degrees of effectiveness. We demonstrate the efficacy of
the attack strategies by comparing the performance of linear SHAP and kernel
SHAP on real-world datasets.
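
As an illustrative sketch of the threat model only (not the attack construction proposed here), the following toy example wraps a trained classifier in a hypothetical output-shuffling function and compares the protected feature's attributions before and after wrapping, using the \texttt{shap} library's kernel SHAP estimator; the synthetic data, the wrapper \texttt{f\_shuffled}, and the within-batch permutation are assumptions made for illustration.

\begin{verbatim}
import numpy as np
import shap
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic data: column 0 is a binary "protected" feature the model relies on.
n = 500
protected = rng.integers(0, 2, size=n)
other = rng.normal(size=(n, 2))
X = np.column_stack([protected, other]).astype(float)
y = (2.0 * protected + other[:, 0] + 0.1 * rng.normal(size=n) > 1.0).astype(int)

model = LogisticRegression().fit(X, y)

def f_original(batch):
    # Per-instance probability of the positive class.
    return model.predict_proba(batch)[:, 1]

def f_shuffled(batch):
    # Toy stand-in for a shuffling attack: permute the scores within the
    # batch so that per-instance attributions no longer track the
    # protected feature. (Hypothetical illustration only.)
    return rng.permutation(f_original(batch))

background = shap.sample(X, 50, random_state=0)
X_eval = X[:25]

for name, f in [("original", f_original), ("shuffled", f_shuffled)]:
    phi = shap.KernelExplainer(f, background).shap_values(X_eval)
    print(name, "mean |attribution| of protected feature:",
          float(np.abs(phi[:, 0]).mean()))
\end{verbatim}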