The exponential increase in the availability of scientific papers, institutional reports, and research monographs in digital contexts (i.e., in digital repositories, archives, or scientific social networks) has led to the advancement of manual, semi-automatic, and automatic methods for analyzing these texts in the digital environment. These techniques cover a heterogeneous range, from manual expert analysis supported by computational methods (usually through annotation systems) to the application of natural language processing algorithms or discourse analysis techniques, which are able to identify cognitive relationships between text elements, e.g. causal structures or contrastive argumentation. This advancement is most evident in humanities research contexts, where most of the knowledge generated is expressed in textual formats. However, how is the use of these techniques affecting the analysis that researchers conduct on humanities texts? Is it possible to measure the quality of the textual analysis? What kinds of cognitive structures are identified in the text using these methods? This paper presents an empirical study conducted with humanities researchers, with the goal of obtaining a better understanding of how these professionals analyze texts in digital contexts using semi-automatic discourse analysis techniques. The paper also proposes a method, based on Thinking Aloud protocols, for designing experiments and evaluating cognitive aspects of software, such as digital textual analysis, with humanities professionals. Finally, the paper discusses how empirical studies and the Thinking Aloud method constitute a solid basis for better understanding the relationship between expert textual analysis in the humanities and its execution using software methods.