This study examined the comments provided by ChatGPT 3.5 and doctoral students on doctoral dissertation proposals, as well as feedback receivers’ behavioral engagement with these two feedback types. The participants, selected through convenience sampling, were 28 PhD students in Teaching English as a Foreign Language from three provinces who wrote their dissertation proposals in English. The first and revised versions of their proposals, together with the comments they received, were analyzed to identify the feedback types and the extent to which the comments were applied. In addition, stimulated recall interviews were conducted to identify the reasons why the participants did not apply some of the comments. Findings showed that both ChatGPT 3.5 and the doctoral students provided both content-related and form-related comments, and both sources offered substantial numbers of elaborated and justified comments on the dissertation proposals. Feedback receivers applied most of the comments from both sources, and the specificity level of the comments affected the incorporation rate. Thematic analysis of the stimulated recall interview data revealed that the participants did not apply certain comments because they found them too broad, inaccurate, difficult to apply, or difficult to understand.