Smishing (SMS phishing) has become a serious cybersecurity threat, especially for elderly and less cyber-aware individuals, causing financial loss and eroding user trust. While prior work has focused on detecting smishing at the level of individual messages, real-world attackers often rely on multi-stage social engineering, gradually manipulating victims through extended conversations before attempting to steal sensitive information. Although several datasets exist for single-message smishing detection, datasets capturing conversational smishing remain largely unavailable, limiting research on multi-turn attack detection. To address this gap, this paper presents a synthetically generated dataset of 3,201 labeled multi-round conversations designed to emulate realistic conversational smishing attacks. The dataset reflects diverse attacker strategies and victim responses across multiple stages of interaction. Using this dataset, we establish baseline performance by evaluating eight models, including traditional machine learning approaches (Logistic Regression, Random Forest, Linear SVM, and XGBoost) and transformer-based architectures (DistilBERT and Longformer), with both engineered conversational features and TF-IDF text representations. Experimental results show that TF-IDF-based models consistently outperform those relying on engineered features alone. The best-performing model, XGBoost with TF-IDF features, achieves 72.5% accuracy and a macro F1 score of 0.691, surpassing both transformer models. Our analysis suggests that transformer performance is limited primarily by input-length constraints and the relatively small training set. Overall, the results highlight the value of lexical signals in conversational smishing detection and demonstrate the usefulness of the proposed dataset for advancing research on defenses against multi-turn social engineering attacks.
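The baseline pipeline described above (a multi-turn conversation flattened to text, vectorized with TF-IDF, then fed to a classifier) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the conversations, labels, and hyperparameters below are invented placeholders, and Logistic Regression (one of the paper's baselines) stands in as the final estimator; the best-performing configuration in the paper instead uses XGBoost's `XGBClassifier` in that slot.

```python
# Minimal sketch of a TF-IDF conversational-smishing classifier.
# All data and hyperparameters here are illustrative placeholders;
# the paper's best model uses XGBoost rather than Logistic Regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Each training example is a full multi-turn conversation flattened
# into a single string (turns joined in order).
conversations = [
    "Hi, this is your bank. We noticed unusual activity. "
    "Please verify your card number to keep your account open.",
    "Hey, are we still on for lunch tomorrow at noon?",
    "Your package is held at customs. Pay the small release fee "
    "at the link below to receive it.",
    "Mom, I got home safe. Talk later!",
]
labels = [1, 0, 1, 0]  # 1 = smishing, 0 = benign (placeholder labels)

# TF-IDF over word unigrams and bigrams, then a linear classifier.
pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=1)),
    ("clf", LogisticRegression(max_iter=1000)),
])
pipeline.fit(conversations, labels)

preds = pipeline.predict(conversations)
print(list(preds))
```

Flattening the whole conversation into one document is what lets fixed-length lexical models sidestep the input-length constraints that hamper the transformer baselines, at the cost of discarding turn structure.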