
Can ChatGPT Be a Certified Accountant? Assessing the Responses of ChatGPT for the Professional Access Exam in Portugal

by Fabio Albuquerque 1,2,* and Paula Gomes dos Santos 1,3
1 Department of Accounting and Auditing, Instituto Politécnico de Lisboa, 1069-035 Lisboa, Portugal
2 Research Center on Accounting and Taxation (CICF), Instituto Politécnico do Cávado and Ave, 4750-810 Barcelos, Portugal
3 Center for Research in Organizations Markets and Industrial Management (COMEGI), Universidade Lusíada, 1349-001 Lisboa, Portugal
* Author to whom correspondence should be addressed.
Adm. Sci. 2024, 14(7), 152; https://doi.org/10.3390/admsci14070152
Submission received: 1 June 2024 / Revised: 3 July 2024 / Accepted: 12 July 2024 / Published: 16 July 2024

Abstract

Purpose: From an exploratory perspective, this paper aims to assess how well ChatGPT scores on an accounting proficiency exam in Portugal, as well as its overall understanding of the issues, purpose and context underlying the questions under assessment. Design/methodology/approach: A quasi-experimental method is used in this study. The questions from an exam by the Portuguese Order of Chartered Accountants (OCC, in the Portuguese acronym) served as input queries, while the responses (outputs) from ChatGPT were compared with those from the OCC. Findings: The findings indicate that ChatGPT's responses were able to deduce the primary issue underlying the matters assessed, although some responses were inaccurate or imprecise. The tool also did not score equally across all matters, being less accurate in those requiring more professional judgment. The findings further show that ChatGPT did not pass the exam, although it came close to doing so. Originality: To the best of the authors' knowledge, there is little research on ChatGPT's accuracy in accounting proficiency exams, this being the first such study in Portugal. Practical implications: The findings from this research can help accounting professionals understand how ChatGPT may be used by practitioners, stressing that it can assist them and improve efficiency but cannot, at least for now, replace the human professional. The findings also highlight the potential use of ChatGPT as an additional resource in the classroom, encouraging students to engage in critical thinking and facilitating open discussion under the guidance of teachers. Consequently, it can also prove beneficial for academic purposes, aiding the learning process.

1. Introduction

The recent and fast advancements in artificial intelligence (AI) technology, particularly large language models (LLMs) such as ChatGPT, have ignited discussions concerning their utility and potential economic impact (Surapaneni et al. 2024). Built on the Generative Pre-trained Transformer (GPT) architecture, ChatGPT has the potential to empower agents to learn and execute tasks with performance surpassing human capabilities when acting autonomously (Retzlaff et al. 2024). Its integration has revolutionised various industries, including auditing and accounting, enhancing efficiency (Boritz and Stratopoulos 2023; Cheng et al. 2024; Fedyk et al. 2022; Kayser and Telukdarie 2024; Rana et al. 2023; Vasarhelyi et al. 2023; Zhao and Wang 2024).
Research has increasingly assessed the implications of AI technologies within the accounting profession and education (Abeysekera 2024; Cao and Zhai 2023; De Villiers et al. 2023; Shchyrba et al. 2024). Considering ChatGPT’s notable ability to achieve high scores on academic and professional exams (Abeysekera 2024; Cao and Zhai 2023), there is growing concern over the integrity of assessments and the need to reassess what constitutes assessable knowledge (Abeysekera 2024).
In Portugal, under Regulation 363/2024, certified accountants must be registered with the Portuguese Order of Chartered Accountants (OCC, in the Portuguese acronym) and must pass an exam as a requirement to access the profession. From an exploratory perspective, this paper assesses whether ChatGPT's responses to the questions proposed by the OCC in its national exam are accurate, as well as its overall understanding of the issues, purpose and context underlying those questions. A quasi-experimental method is used in this study: the questions from an OCC exam served as input queries, while the responses (outputs) from ChatGPT were compared with those from the OCC.
To the best of the authors' knowledge, there is little research on ChatGPT's accuracy in accounting proficiency exams, this being the first such study in Portugal. According to the findings, ChatGPT did not pass the exam, although it came close to doing so. Moreover, the responses indicated that ChatGPT was able to deduce the primary issue underlying the matters assessed, although some responses were inaccurate or imprecise. The tool also did not score equally across all matters, being less accurate in areas requiring more professional judgment.
The findings from this research can help accounting professionals understand how ChatGPT may be used in the profession, stressing that it can assist them and improve efficiency but that it cannot, at least for now, replace the human professional. This paper also highlights the potential use of ChatGPT as an additional resource in the classroom, encouraging students to engage in critical thinking and facilitating open discussion under the guidance of teachers. Consequently, it can also prove beneficial for academic purposes, aiding the learning process.
Finally, as highlighted by Abeysekera (2024), ChatGPT can contribute to the United Nations Sustainable Development Goals by enabling literacy skills. Therefore, this study can also contribute to alerting professional bodies which certify accountants’ competence and the accounting academy about the limitations of current assessments and the need to rethink course programs and teaching models. It sheds light on the need to understand the advantages and limitations of using artificial intelligence software, such as ChatGPT, in accounting education and professional practice to avoid failing to meet stakeholders’ needs, such as those of businesses, government, and society.
This paper is structured in four sections, in addition to this Introduction. The next section provides a literature review, while the third identifies the research questions, materials and methods underlying the exploratory analysis proposed for this paper, and is followed by the results section. Finally, the last section presents the conclusions, as well as identifying limitations and providing suggestions for future research avenues.

2. Literature Review

In accounting, the literature has addressed the validity and usefulness of integrating AI technologies both in academia and in the profession (Al Ghatrifi et al. 2023; Boritz and Stratopoulos 2023), as well as their utility as novel research tools (Dong et al. 2023). For instance, they have been employed in accounting research to evaluate disclosure sentiment and readability, analyse word counts, examine keyword frequency, and explore disclosure topics (Bochkay et al. 2023). Notwithstanding the opportunities presented by AI technologies, they also introduce new threats, both to professionals and academia (Abeysekera 2024; Ballantine et al. 2024), as they are perceived as a potential substitute for human jobs (Fedyk et al. 2022; Papakonstantinidis et al. 2024; Zadorozhnyi et al. 2023) and raise ethical concerns regarding their use (Abeysekera 2024; Alshurafat et al. 2023; Cao and Zhai 2023; Scarfe et al. 2024).
For instance, Alshurafat et al. (2023) revealed that accounting students used ChatGPT to cheat, highlighting the broader implications of AI misuse within the educational system. Furthermore, Scarfe et al. (2024) stressed that the integrity of assessments is at stake if cheating using AI is not detected. The authors submitted AI-generated assignments in five undergraduate modules of a bachelor's degree in psychology at a United Kingdom university, across all years of the course; the submissions went undetected in 94% of cases and received better grades than those of the students.
Thus, while ChatGPT offers indisputable advantages, it is essential to recognise and address potential limitations to ensure its ethical and reliable application (Cohen et al. 2023; Lund and Wang 2023; Vasarhelyi et al. 2023; Zhao and Wang 2024). As it synthesises existing information, not always accurately, the literature emphasises that ChatGPT should be viewed as an aid, rather than a replacement for human intelligence (Abeysekera 2024; Cohen et al. 2023; Tsai et al. 2024), proving beneficial for those with expertise but being potentially detrimental otherwise.
Moreover, although it may provide comprehensive answers, especially to theory-based questions with readily available answers (Cohen et al. 2023), it may fail when faced with complex technical queries requiring expert knowledge (Abeysekera 2024; Cohen et al. 2023). Therefore, while these tools can assist in problem-solving, they require a certain level of expertise in formulating questions and analysing responses (Abeysekera 2024; Stott and Stott 2023). It is thus imperative to acknowledge that ChatGPT may offer comprehensive and persuasive answers without a deep comprehension of the problem (Abeysekera 2024; Cohen et al. 2023; Rudolph et al. 2023), making such expertise crucial to ensure that responses correctly address the problem (Stott and Stott 2023).
These concerns have been particularly highlighted in the medical field, which has been increasingly integrating AI tools (Jarry Trujillo et al. 2024). Although ChatGPT can contribute to this area (Jarry Trujillo et al. 2024; Meral et al. 2024; Turan et al. 2024), it may have limitations in evaluating patients with more complex health conditions (Turan et al. 2024).
Furthermore, research has also investigated the integration of AI into medical education, namely, by evaluating the proficiency of ChatGPT across various medical examinations of several specialties (Fiedler et al. 2024; Isleem et al. 2024; Mackey et al. 2024; Shojaee-Mend et al. 2024; Tran et al. 2024; Tsai et al. 2024). Although in some specialties, its overall efficacy is positive (Fiedler et al. 2024; Isleem et al. 2024; Mackey et al. 2024; Tsai et al. 2024), it has lower performance in those with significant interdisciplinary elements, for example, where complex clinical scenarios are involved (Mackey et al. 2024).
For instance, Shojaee-Mend et al. (2024) identified significant deficiencies in reasoning, discerning priorities and integrating knowledge when evaluating neurophysiology questions with ChatGPT. Similarly, Tran et al. (2024) observed that it performed worse in higher-level questions necessitating complex clinical decision-making, even though it provided detailed rationales for its answers. Isleem et al. (2024) emphasised that ChatGPT’s reasoning should be meticulously analysed for accuracy and clinical validity. Notwithstanding this, Tsai et al. (2024) noted that ChatGPT could have passed the Taiwan Urology Board Examination (TUBE) based on accuracy alone, but it ultimately failed due to penalties impacting the final score.
The high scores achieved by ChatGPT in academic and professional examinations (Abeysekera 2024; Cao and Zhai 2023) have led to the need to reconsider what qualifies as assessable knowledge, to preserve the assessments’ integrity (Abeysekera 2024). Wood et al. (2023), for instance, compared the performance of ChatGPT and students on 28,085 questions from accounting assessments and textbook test banks, using data from 14 countries and 186 institutions. Students outperformed ChatGPT, achieving an average score of 76.7% on assessments, compared to 47.5% for ChatGPT if no partial credit was awarded, and 56.5% if partial credit was considered. Nevertheless, ChatGPT performed better than the student average on 15.8% of assessments when partial credit was included.
ChatGPT performance in professional exams has also been assessed in accounting, as has happened with other areas of knowledge, such as medicine (e.g., Gilson et al. 2023; Subramani et al. 2023; Tsai et al. 2024), law (e.g., Choi et al. 2023) and computer science (e.g., Bordt and Luxburg 2023). For instance, ChatGPT 3.5 failed to answer questions in the Certified Public Accountant exam, which qualifies candidates for the accounting profession in the United States (Accounting Today 2023a), but it succeeded in version 4.0 (Accounting Today 2023b). Also in this area, ChatGPT passed the Accounting Proficiency Exam in Brazil, scoring higher on matters demanding theoretical knowledge of several norms, regulations, resolutions and laws (i.e., which require higher memory), but having a lower performance whenever professional judgment was required (Freitas et al. 2024).
Similarly, this research assesses how ChatGPT performs on the accounting exam that qualifies candidates for access to the accounting profession in Portugal, i.e., for the role of certified accountant in this country, as well as related characteristics of this tool.
The next section presents the research questions, as well as the materials and methods used in this study.

3. Research Questions, Materials and Methods

This study uses an exploratory perspective of analysis, aiming to assess whether the responses of ChatGPT are accurate regarding the questions proposed by the OCC in its latest national exam, of 24 February 2024, held as a requirement to access the profession of certified accountant in Portugal under OCC Regulation 363/2024. To achieve this purpose, the study uses a quasi-experimental method. Specifically, it uses a set of questions (inputs) and answers (outputs) on accounting (including taxation) matters, as proposed in the OCC exam, as the basis for a comparative analysis (control element) with the answers (outputs) provided by ChatGPT to the same questions (inputs) inserted into the tool. Version 3.5 of ChatGPT is primarily used, since it is the version currently freely available to everyone. Notwithstanding this, the overall findings are also compared with the latest freely available (despite some restrictions) version of this tool, namely, ChatGPT 4.0.
Therefore, the excerpts (questions) gathered from that material were selected as the research object, using the proposed answer key as the reference for the correct answers expected from ChatGPT. The exam covers diverse accounting topics through 40 questions, including financial and management accounting and taxation, as well as statutory and deontological matters.
The first part of the exam comprises 20 general questions, which implicitly integrate those topics. In turn, the second part contains another 20 specific questions on those topics, as shown in Table 1.
To pass, along with other criteria, candidates must achieve a grade equal to or higher than 9.5 points, which can be rounded to 10. Each correct answer is worth 0.5 points, each wrong answer implies a penalty of 0.15 points, and unanswered questions receive a null score. If a given answer is considered invalid by the OCC, 0.5 points are attributed to all candidates.
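For illustration, these scoring rules can be sketched as a short function (the candidate figures in the example below are hypothetical, not taken from the exam):

```python
def occ_exam_grade(correct: int, wrong: int, invalid: int = 0) -> float:
    """Grade under the OCC scoring rules: +0.5 per correct answer,
    -0.15 per wrong answer, 0 for unanswered questions, and +0.5 for
    each question invalidated by the OCC (credited to all candidates)."""
    return correct * 0.5 - wrong * 0.15 + invalid * 0.5

# A hypothetical candidate: 25 correct, 14 wrong, 1 invalidated question.
# Grade = 12.5 - 2.1 + 0.5 = 10.9, which meets the 9.5-point threshold.
grade = occ_exam_grade(correct=25, wrong=14, invalid=1)
print(round(grade, 2))
```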
Five research questions (RQs) emerged as relevant for this analysis, based on the objective proposed for this research:
  • RQ1. Can ChatGPT properly identify the main issues underlying the questions?
  • RQ2. Can ChatGPT provide a useful analysis of the issues underlying the questions?
  • RQ3. Can ChatGPT provide an objective answer to the questions proposed?
  • RQ4. Are the answers provided by ChatGPT accurate, considering those provided by the OCC as a reference for their accuracy?
  • RQ5. Considering the issues underlying RQ1 to RQ4, how do the findings from ChatGPT in its latest version (4.0) compare to those from ChatGPT 3.5?
As such, several aspects of the answers provided by ChatGPT, by comparison with those provided by the OCC, are considered, which can be summarised as follows:
  • RQ1: the global efficacy regarding the understanding of the issue (the focus on it) underlying the questions;
  • RQ2: the helpful analysis of the topics underlying the questions;
  • RQ3: the understanding of the objective underlying the questions, i.e., to select one of the proposed options;
  • RQ4: the global accuracy (reliability) of the answers;
  • RQ5: the comparison between the latest and the previous version of ChatGPT across those characteristics.
The analysis of the RQs is based on a qualitative analysis of the content provided by ChatGPT in comparison to the answers provided by the OCC, together with the authors' expertise on the matters under assessment.
The questions or cases (inputs) from the OCC exam were individually inserted into ChatGPT on 1 May and 26 June, with no further information besides the indication that Portuguese laws and the Portuguese accounting framework are applicable. This was intended to apply the same conditions to both ChatGPT and the candidates.
The next section provides the findings.

4. Results

This section presents the findings for each of the five RQs detailed in the previous section.
Regarding RQ1, the findings indicate an overall understanding of the issues underlying the proposed questions, even when specific national legislation is addressed, as is the case for questions related to taxation and statutory issues. The tool also relates the situation (problem-case scenarios) to the legal framework, even when it is not explicitly provided within the question. Moreover, it retains the information (assumptions) from the previous question, linking the context with the questions, and then applies the relevant sources and analysis to solve them. The excerpt below illustrates this (as translated by the authors and using the Portuguese acronyms).
Initial context (input) inserted by the authors:
“I ask you to consider, for the following questions, the Portuguese law as the framework applicable, except if provided differently, is that possible?”
ChatGPT’s answer:
“Of course, I can focus the answers on the framework of the legislation applicable in Portugal. If you need further clarification or have other specific questions, feel free to ask!”
Initial context (input) extracted from the OCC’s exam and inserted by the authors:
“FOUNDEX SGPS S.A., owned by the Alves family, is the parent company of a business group (FOUNDEX Group) founded in the 1980s by Miguel Alves, a businessman from the Setúbal region, where he was born and has always lived. FOUNDEX holds stakes in companies from different sectors of activity, both in Portugal and abroad, namely, Spain and Angola. Miguel Alves’ strategy has always been based on giving participation in the companies’ share capital to the main managers, reserving most of the share capital for FOUNDEX.”
(..)
Second question, as inserted by the authors:
“Miguel Alves has always had great social concerns and founded “Ajuda de Mão”, a non-profit entity, with IPSS status. In January 2024, “Ajuda de Mão” sold land that had been donated to it. As a result of this operation, the entity recorded a tax gain.
This gain:
(a)
It is not subject to IRC.
(b)
It is subject to IRC, although it benefits from exemption.
(c)
It is subject to IRC in general terms.
(d)
It is subject to IRC, and the entity can benefit from the tax regime of reinvestment.”
Excerpt of ChatGPT’s answer for the second question:
“In the context of Portuguese tax legislation, Private Social Solidarity Institutions (IPSS), such as the “Ajuda de Mão” mentioned, enjoy a specific tax regime that includes several exemptions, including Corporate Income Tax (IRC), under the Tax Benefits Statute and the IRC Code. Specifically, IPSS are generally exempt from IRC in activities that fall within their statutory purposes and are not for profit, which covers most of their operations. However, the sale of land that has been donated may not directly fit into IPSS’s typical activity or statutory purposes, which could raise questions about the applicability of the exemption. Even so, article 10 of the IRC Code stipulates that entities that enjoy exemption from IRC are subject to taxation only if they carry out operations that are not directly related to their statutory objectives or that constitute a commercial activity. However, when an IPSS sells a donated asset, such as land, to raise funds for its activities, that transaction may still be considered part of its exempt non-commercial activities, depending on the specific purpose of the gain (…).”
Concerning RQ2, the findings indicate that the tool provides a helpful analysis to its users, either by specifying the relevant framework, as indicated above, or by providing the applicable concepts and formulas to solve the cases, which are globally well organised and structured. The example below illustrates this.
Thirty-first question, as inserted by the authors:
A given company has a normal production capacity of 40,000 units of the RQL product per year. Last year, the company produced 32,000 RQL units, with the final finished production inventory being 2000 RQL units. The unit sales price is EUR 28.00. The expenses for the year were as follows: Raw materials EUR 136,000; Fixed Conversion Expenses EUR 162,000; Variable Conversion Expenses EUR 26,800; Fixed Administrative Expenses EUR 18,000; Variable Distribution Expenses EUR 5200.
Given the above, the Industrial Cost of Products Sold (CIPV), assuming the adoption of the Rational Costing System, is:
(a)
EUR 241,725.
(b)
EUR 274,125.
(c)
EUR 288,525.
(d)
None of the previous is correct.
Excerpt of ChatGPT’s answer for the 31st question:
“To calculate the Industrial Cost of Goods Sold (CIPV) under the rational costing system, we need to consider that this costing system distributes fixed costs based on normal production capacity, not actual production. Additionally, variable costs are assigned based on actual production. In this system, fixed costs are apportioned according to normal capacity, regardless of the actual production volume.
Data:
  • Normal production capacity: 40,000 units/year
  • Actual production: 32,000 units
  • Ending inventory: 2000 units
  • Sales: 32,000 − 2000 = 30,000 units
  • Unit sales price: EUR 28.00
Annual Expenses:
  • Raw materials: EUR 136,000
  • Fixed conversion expenses: EUR 162,000
  • Variable conversion expenses: EUR 26,800
  • Fixed administrative expenses: EUR 18,000 (not included in CIPV)
  • Variable distribution expenses: EUR 5200 (not included in CIPV)
Calculations:
1.
Cost of raw materials per unit:
  • 136,000 EUR/32,000 units = 4.25 EUR/unit
2.
Variable conversion cost per unit:
  • 26,800 EUR/32,000 units = 0.8375 EUR/unit
3.
Apportionment of fixed conversion expenses based on normal capacity:
  • 162,000 EUR/40,000 units = 4.05 EUR/unit
4.
Total cost per unit produced:
  • Raw materials + variable conversion + fixed conversion = EUR 4.25 + EUR 0.8375 + EUR 4.05 = 9.1375 EUR/unit
5.
Industrial cost of products sold (CIPV):
  • CIPV = Total cost per unit produced × units sold
  • CIPV = 9.1375 EUR/unit × 30,000 units = EUR 274,125 (…)”
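The rational-costing arithmetic quoted above can be reproduced with a short script (a sketch using the figures from the exam question; under rational costing, fixed conversion costs are absorbed per unit of normal capacity, while variable costs follow actual production):

```python
# Figures from the 31st exam question (amounts in EUR).
normal_capacity = 40_000     # units/year
actual_production = 32_000   # units produced last year
ending_inventory = 2_000     # units left in finished-goods inventory
units_sold = actual_production - ending_inventory  # 30,000 units

raw_materials = 136_000
fixed_conversion = 162_000
variable_conversion = 26_800
# Fixed administrative and variable distribution expenses are excluded
# from the industrial cost.

# Variable costs are absorbed over actual production; fixed conversion
# costs are absorbed over NORMAL capacity (the rational costing rule).
unit_cost = (raw_materials / actual_production
             + variable_conversion / actual_production
             + fixed_conversion / normal_capacity)   # ≈ 9.1375 EUR/unit

cipv = unit_cost * units_sold  # Industrial Cost of Products Sold ≈ EUR 274,125
print(round(unit_cost, 4), round(cipv, 2))
```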
As for RQ3, ChatGPT also provides a detailed per-option analysis of the alternatives proposed. Its messages indicate which option it considers correct from a critical and comparative perspective, although some answers propose either none of the alternatives or more than one as the right choice. In some cases, further information is provided to support users' decision-making, based on circumstances that, according to the tool, should be taken into account whenever they are not explicitly provided in the input. The following two examples illustrate these characteristics of ChatGPT's answers.
Fourteenth question, as inserted by the authors:
“Another friend of José Silva is Carlos Martins, a tax resident in Switzerland, who is considering returning to Portugal after 10 years of stay in that country. He will set up a company governed by Portuguese law and appoint himself as its manager. If you stipulate a salary of EUR 2000.00 per month, for holding the position of management, at the IRS level:
(a)
You can benefit from a tax exemption on half of the aforementioned amount.
(b)
You can benefit from being exempt from tax on half of the aforementioned amount.
(c)
You can benefit from a progressive exemption on half of the aforementioned amount.
(d)
You can benefit from a reduction in the IRS rate on category A income.”
Excerpt of ChatGPT’s answer for the 14th question:
“Carlos Martins, when considering returning to Portugal after a decade in Switzerland and establishing a new company in Portugal with a monthly salary for his management role, may be eligible for certain tax benefits designed to attract non-habitual residents (RNH) to Portugal. The Non-Habitual Resident (RNH) regime offers favourable tax conditions for individuals who become tax residents in Portugal but who have not been residents in the last five years”.
Options Analysis:
Tax Regime for Non-Habitual Residents:
The RNH provides a flat IRS rate of 20% on income from work (category A) and from business and professions (category B), as long as they come from activities with high added value of a scientific, artistic or technical nature. Furthermore, income from a foreign source (depending on the type) may be exempt from taxation in Portugal, as long as it is also taxable in the country of origin, under a double taxation elimination agreement or can be taxed in that country.
Option a) You can benefit from a tax exemption on half of the aforementioned amount. There is no such exemption in the RNH. The rates applied are 20% on income from dependent and independent work that falls within activities with high added value.
Option b) You can benefit from being exempt from tax on half of the amount mentioned. This option also does not reflect the reality of the RNH regime or any other common tax regime in Portugal.
Option c) You can benefit from a progressive exemption on half of the aforementioned amount. This description does not correspond to the characteristics of the RNH. In the RNH, a special rate of 20% applies to certain income or exemption for foreign income, but there is no reference to “progressive exemption”.
Option d) You can benefit from an IRS rate reduction on category A income. This option is not correct under the terms of the NHR for category A income, as there is no “rate reduction” but rather a flat rate of 20% on income from qualified dependent and independent work.
Conclusion:
None of the options presented correctly reflects the conditions of the RNH regime or other tax benefits applicable to individuals in Carlos Martins’ situation, as described. If he qualifies for the RNH, the closest feature would be a flat rate taxation of 20% on income from high value-added activities or exemption for foreign income, depending on its origin and nature, which is not explicitly mentioned in the options given. It is possible that the wording of the question and options may have been inappropriate to correctly describe the tax benefits available under the RNH or other regimes.”
Thirty-sixth question, as inserted by the authors:
“Professional societies of certified accountants must:
(a)
Appoint a technical director who will be responsible for ensuring compliance with statutory and ethical duties.
(b)
Be made up exclusively of certified accountants.
(c)
Provide exclusively the services defined in Article 10, paragraph 1, of the Statute of Order.
(d)
Be limited liability companies.”
Excerpt of ChatGPT’s answer for the 36th question:
In the regulation of professional societies of certified accountants in Portugal, several aspects are defined by legislation and the statute of the Order of Certified Accountants (OCC). These regulations address the structure, responsibilities and activities permitted for these companies.
Options Analysis:
Option a) Appoint a technical director who will be responsible for ensuring compliance with statutory and ethical duties. This option is correct according to the Statute of the Order of Certified Accountants. The technical director (or official accountant, as he was previously called) is responsible for ensuring compliance with the profession’s statutory and ethical duties, being a central figure in guaranteeing the integrity and quality of the services provided by the company.
Option b) Be constituted exclusively by certified accountants. This option is also correct. Professional societies of certified accountants must be constituted exclusively by members who are certified accountants registered with the Order. This ensures that all partners have the necessary qualifications and certification to provide accounting services following the required professional standards.
Option c) Provide exclusively the services defined in Article 10, no. 1, of the Order’s Statute. This option is correct. Professional societies of certified accountants are limited to providing services that are within the professional activities defined in the OCC statute, which includes accounting, auditing and tax consultancy services, among other related services.
Option d) Be limited liability companies. This option is incorrect. Professional companies of certified accountants can take different legal forms and are not limited to being limited liability companies. They may, for example, be limited liability companies, or public limited companies, among other forms permitted by Portuguese legislation.
Conclusion:
Options a), b) and c) are correct. Each of these options adequately reflects a requirement or characteristic of professional companies of certified accountants following the statute of the Order of Certified Accountants and the legislation applicable in Portugal. Option d) is the only one that is not correct since certified accounting firms are not restricted to being limited liability companies.
Table 2 provides the figures for the analysis of RQ4, related to the accuracy of the answers provided by the ChatGPT.
The findings from Table 2 indicate that ChatGPT provided 18 out of 39 correct answers, with a maximum of 22 if the four questions with alternative options are considered. In the best scenario, this corresponds to 8.95 points [=22 × 0.5 (correct answers) − 17 × 0.15 (wrong answers) + 0.5 (answer invalidated by the OCC, for which the points are attributed to all candidates)], which rounds to 9.0 and would therefore not enable a candidate to achieve the role of certified accountant in Portugal. Interestingly, all questions related to management accounting topics were correctly answered by this tool.
Finally, regarding RQ5, which compares the findings from the latest version of ChatGPT (4.0) with those obtained using version 3.5, the latest version did not diverge in its level of efficacy concerning the topics assessed, also providing helpful analysis to users. However, its answers were commonly more objective regarding the main issue underlying the questions and focused on explaining the alternative indicated as the correct answer. Another distinctive characteristic of the latest version is the absence of answers proposing several options as likely (alternative) right answers. Moreover, response times were usually shorter. There were also several cases in which the answers were less wordy than those provided by version 3.5, with no particular pattern that might explain this characteristic. Furthermore, ChatGPT 4.0 seems, interestingly, to be less accurate than its previous version, as the following figures summarise:
  • There were 23 (57.5%) divergent answers between the two versions;
  • Nonetheless, only seven (17.5%) of the modified answers became correct, while exactly seven others were changed from correct to incorrect. Consequently, the latest version did not improve the global accuracy of the exam scoring (still 18 out of 39 correct answers); moreover, since version 4.0 no longer offered multiple alternatives as possible right answers, its results can even be considered less accurate in a certain sense;
  • Interestingly, most of the answers incorrectly modified by ChatGPT 4.0 relate to the management accounting field, which breaks the pattern found in the results from the previous version of this tool.
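The net effect of these figures can be checked with a few lines of Python (numbers transcribed from the bullets above):

```python
# Version 3.5 vs. 4.0 comparison as reported above: 23 of 40 answers
# diverged, 7 changed from wrong to correct and 7 from correct to wrong,
# so the overall score is unchanged.
total_questions = 40
divergent = 23
newly_correct, newly_incorrect = 7, 7

divergence_rate = divergent / total_questions
score_v35 = 18  # correct answers out of 39 valid questions (version 3.5)
score_v40 = score_v35 + newly_correct - newly_incorrect

print(f"{divergence_rate:.1%}")  # 57.5%
print(score_v40)                 # 18: global accuracy unchanged
```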
The last section provides the conclusions from this study, including its limitations and suggestions for future research.

5. Conclusions

Using a quasi-experimental methodology and an exploratory viewpoint, this study aimed to evaluate ChatGPT's answers as a supporting tool for responding to the OCC exam, which qualifies candidates for the role of certified accountant in Portugal. The research questions assessed the global efficacy regarding the understanding of the issue, the helpfulness of the analysis of the topics, and the understanding of the objective underlying the questions, as well as the global accuracy (reliability) of the answers provided by ChatGPT and the comparison between the latest and the previous version of this tool. The inputs (questions) to ChatGPT were taken from the OCC exam, and the outputs (answers) were evaluated by comparing them to the proposed answer key.
The results show that ChatGPT's responses were able to deduce the primary issue underlying the topics proposed. ChatGPT also offered readers a critical perspective of analysis and explanations of the principles underpinning the questions. Nevertheless, some responses were deemed inaccurate or imprecise. This reinforces the conclusion in the literature that users who possess the requisite knowledge must validate the platform's responses, given that it may provide convincing answers without fully comprehending the question or by making incorrect assumptions about it (Abeysekera 2024; Cohen et al. 2023; Lund and Wang 2023; Rudolph et al. 2023; Stott and Stott 2023; Vasarhelyi et al. 2023; Zhao and Wang 2024).
The inaccuracies, however, are more evident in matters requiring more professional judgment and technical complexity, such as statutory and deontological questions, taxation questions, and financial accounting issues, in line with the literature (Abeysekera 2024; Cohen et al. 2023; Freitas et al. 2024). Conversely, management accounting, which is less affected by standardisation than financial accounting, is the subject where the tool scores highest. This suggests that the accounting harmonisation process led by the International Accounting Standards Board (IASB) can increase the level of subjectivity, since the International Financial Reporting Standards (IFRS) are principles-based standards. In light of this, similar to other countries whose national IFRS-based standards require a high degree of professional judgment (Coram and Wang 2021; Gierusz et al. 2022; Hellmann et al. 2021; Hellmann and Patel 2021; Maradona et al. 2024), the financial accounting matters in the OCC exam can reflect the convergence of Portuguese standards with those standards, falling into the same category as taxation, statutes and deontology as "interpretation challenges" for ChatGPT.
Finally, the use of the latest freely available (despite some restrictions) version, ChatGPT 4.0, did not prove more accurate than version 3.5 used in this paper, contrary to what was evidenced by previous research (e.g., Abeysekera 2024; Freitas et al. 2024). Moreover, it did not evidence any particularly higher level of accuracy concerning the matters under assessment. On the other hand, it was more "sure" of the answers provided than version 3.5, i.e., it did not indicate different alternative answers in any case, although the answers provided were not necessarily correct or adequate.
In a different context, these results might also suggest that ChatGPT's outputs can be helpful for professionals, such as managers and accountants, since they provide an overview of the issues and concerns being assessed in a particular situation, along with pertinent sources to be considered when making decisions. In the classroom, for example, students' critical sense can be developed through group projects and discussions with the assistance of professors; as a result, the tool can also be beneficial for academic purposes. On the other hand, ChatGPT increases the challenge for academics of preventing students from cheating with such tools in online pedagogical and assessment methods, whose use was particularly strengthened after the COVID-19 pandemic.
Practitioners and academics who have a basic understanding of accounting and financial reporting can benefit from this research. It can also be helpful to local and international standard-setting bodies by offering insight into how to enhance the utility of similar documents, including explanatory examples, and into how stakeholders read or understand those documents. Finally, it adds to the body of knowledge by presenting a range of perspectives on the diversity of analyses that could be applied in future accounting studies.
The analysis underlying this study will be accessible to future researchers adopting similar perspectives. Researchers' conclusions can be compared with those obtained by practitioners and students (future candidates) with reasonable knowledge of the subject matter and the language of reporting and accounting standards. Possible judgement bias may also be examined in this research field, along with its implications for the appropriateness of interpreters' interpretations and applications. Further individual experimental studies can complement this research goal, assisting standard-setting bodies and legislators in being clear and, therefore, understandable.
These results can also be compared with others evaluating accounting documents that address different topics, or questions of types and natures different from those found in such exams. Contrasting the findings may then be useful in determining whether they exhibit a distinct pattern depending on the subjects being evaluated. Lastly, analysing the textual characteristics of the questions (inputs) and of ChatGPT's responses, such as readability level and message tone through sentiment analysis using appropriate technological tools, can be useful to check for patterns (similarities and differences) among ChatGPT's answers that can be attributed to the specific profiles and goals of the institutions or respondents.
There are several applications for this research. However, users must also consider its limitations. The first relates to the tool's dependence on the inputs supplied by users, which determine the features it evaluates as pertinent. Second, despite the recent rapid evolution of tools driven by machine learning improvements, a significant problem persists in the analysis performed, mainly because it prioritises text over context. The third limitation originates from the subjective viewpoint inherent to content analysis, a method commonly employed in these types of studies. To overcome these constraints, researchers can improve or modify the outcomes obtained from ChatGPT by adding further inputs and comparing the findings, which is beyond the scope of this study. Finally, it was verified that, even with no further inputs, it is possible to obtain different answers from ChatGPT by simply repeating the question, which may indicate a still-limited level of reliability in the answers provided by this tool.
In light of the findings and limitations of this research, it is possible to conclude that, although ChatGPT did not qualify as a certified accountant in Portugal, it scored well enough to make it necessary for professional bodies and educators to reevaluate how, and what, knowledge and skills are assessed, as Abeysekera (2024) argues.

Author Contributions

Both authors contributed equally to this research with regard to its conceptualization, methodology, software, validation, formal analysis, investigation, resources, data curation, writing—original draft preparation, writing—review and editing, visualization, supervision, project administration, and funding acquisition. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Instituto Politécnico de Lisboa, project IPL/IDI&CA2024/IAccount_ISCAL [Grant number is not applicable]. This study was conducted at the Research Center on Accounting and Taxation (CICF) and was funded by the Portuguese Foundation for Science and Technology (FCT) through national funds (UIDB/04043/2020 and UIDP/04043/2020).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available because they are date-specific.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Abeysekera, Indra. 2024. ChatGPT and academia on accounting assessments. Journal of Open Innovation: Technology, Market, and Complexity 10: 100213.
  2. Accounting Today. 2023a. We Had ChatGPT Take the CPA Exam—And It Failed. Available online: https://www.accountingtoday.com/news/we-ran-the-cpa-exam-through-chatgpt-and-it-failed-miserably (accessed on 6 May 2024).
  3. Accounting Today. 2023b. ChatGPT Passes CPA Exam on V. 4.0. Available online: https://www.accountingtoday.com/news/chatgpt-retakes-cpa-exam-as-v4-0-passes (accessed on 6 May 2024).
  4. Al Ghatrifi, Maryam Omer Madhaffar, Jawaher Salim Sultan Al Amairi, and Mohammed Muneerali Thottoli. 2023. Surfing the technology wave: An international perspective on enhancing teaching and learning in accounting. Computers and Education: Artificial Intelligence 4: 100144.
  5. Alshurafat, Hashem, Mohannad Obeid Al Shbail, Allam Hamdan, Ahmad Al-Dmour, and Waed Ensour. 2023. Factors affecting accounting students’ misuse of ChatGPT: An application of the fraud triangle theory. Journal of Financial Reporting and Accounting 22: 274–88.
  6. Ballantine, Joan, Gordon Boyce, and Greg Stoner. 2024. A critical review of AI in accounting education: Threat and opportunity. Critical Perspectives on Accounting 99: 102711.
  7. Bochkay, Khrystyna, Stephen V. Brown, Andrew J. Leone, and Jennifer Wu Tucker. 2023. Textual Analysis in Accounting: What’s Next? Contemporary Accounting Research 40: 765–805.
  8. Bordt, Sebastian, and Ulrike von Luxburg. 2023. ChatGPT participates in a computer science exam. arXiv preprint arXiv:2303.09461.
  9. Boritz, J. Efrim, and Theophanis C. Stratopoulos. 2023. AI and the Accounting Profession: Views from Industry and Academia. Journal of Information Systems 37: 1–9.
  10. Cao, Yi, and Jia Zhai. 2023. Bridging the gap–the impact of ChatGPT on financial research. Journal of Chinese Economic and Business Studies 21: 177–91.
  11. Cheng, Ping, Chang Yu, and Junjina Wang. 2024. An intelligent risk warning model of enterprise internal audit based on deep autoencoder network in the ChatGPT era: Audit case of current accounts. Xitong Gongcheng Lilun yu Shijian/System Engineering Theory and Practice 44: 316–37.
  12. Choi, Jonathan H., Kristin E. Hickman, Amy Monahan, and Daniel Schwarcz. 2023. ChatGPT goes to law school. Journal of Legal Education 71: 387.
  13. Cohen, Sandra, Francesca Manes Rossi, and Isabel Brusca. 2023. Debate: Public sector accounting education and artificial intelligence. Public Money & Management 43: 725–26.
  14. Coram, Paul J., and Leiyu Wang. 2021. The effect of disclosing key audit matters and accounting standard precision on the audit expectation gap. International Journal of Auditing 25: 270–82.
  15. De Villiers, Charl, Ruth Dimes, and Matteo Molinari. 2023. How will AI text generation and processing impact sustainability reporting? Critical analysis, a conceptual framework, and avenues for future research. Sustainability Accounting, Management and Policy Journal 15: 96–118.
  16. Dong, Mengming Michael, Theophanis C. Stratopoulos, and Victor Xiaoqi Wang. 2023. A Scoping Review of ChatGPT Research in Accounting and Finance. Available online: https://ssrn.com/abstract=4680203 (accessed on 3 July 2024).
  17. Fedyk, Anastassia, James Hodson, Natalya Khimich, and Tatiana Fedyk. 2022. Is artificial intelligence improving the audit process? Review of Accounting Studies 27: 938–85.
  18. Fiedler, Benjamin, Eric N. Azua, Todd Phillips, and Adil Shahzad Ahmed. 2024. ChatGPT performance on the American Shoulder and Elbow Surgeons maintenance of certification exam. Journal of Shoulder and Elbow Surgery, 1–6.
  19. Freitas, Marcelo Machado, Jonatas Dutra Sallaberry, and Thiago Bruno de Jesus Silva. 2024. Application of ChatGPT 4.0 for Solving Accounting Problems. The Journal of Globalization, Competitiveness, and Governability 18: 49–64.
  20. Gierusz, Jerzy, Katarzyna Kolesnik, Sylwia Silska-Gembka, Anna Zamojska, and Tamer Elshandidy. 2022. The influence of culture on accounting judgment—Evidence from Poland and the United Kingdom. Cogent Business & Management 9: 1–29.
  21. Gilson, Aidan, Conrad W. Safranek, Thomas Huang, Vimig Socrates, Ling Chi, Richard Andrew Taylor, and David Chartash. 2023. How does ChatGPT perform on the United States Medical Licensing Examination? The implications of large language models for medical education and knowledge assessment. JMIR Medical Education 9: e45312.
  22. Hellmann, Andreas, and Chris Patel. 2021. Translation of International Financial Reporting Standards and implications for judgments and decision-making. Journal of Behavioral and Experimental Finance 30: 100479.
  23. Hellmann, Andreas, Chris Patel, and Noriyuki Tsunogaya. 2021. Foreign-language effect and professionals’ judgments on fair value measurement: Evidence from Germany and the United Kingdom. Journal of Behavioral and Experimental Finance 30: 100478.
  24. Isleem, Ula N., Bashar Zaidat, Renee Ren, Eric A. Geng, Aonnicha Burapachaisri, Justin E. Tang, Jun S. Kim, and Samuel K. Cho. 2024. Can generative artificial intelligence pass the orthopaedic board examination? Journal of Orthopaedics 53: 27–33.
  25. Jarry Trujillo, Cristián Jarry, Javier Vela Ulloa, Gabriel Escalona Vivas, Eugenio Grasset Escobar, Ignacio Villagrán Gutiérrez, Pablo Achurra Tirado, and Julián Varas Cohen. 2024. Surgeons vs. ChatGPT: Assessment and Feedback Performance Based on Real Surgical Scenarios. Journal of Surgical Education 81: 960–66.
  26. Kayser, Kenneth, and Arnesh Telukdarie. 2024. Literature Review: Artificial Intelligence Adoption Within the Accounting Profession Applying the Technology Acceptance Model (3). In Towards Digitally Transforming Accounting and Business Processes, Springer Proceedings in Business and Economics. Edited by Tankiso Moloi and Babu George. Cham: Springer, pp. 217–31.
  27. Lund, Brady D., and Ting Wang. 2023. Chatting about ChatGPT: How may AI and GPT impact academia and libraries? Library Hi Tech News 40: 26–29.
  28. Mackey, Brendan P., Razmig Garabet, Laura Maule, Abay Tadesse, James Cross, and Michael Weingarten. 2024. Evaluating ChatGPT-4 in medical education: An assessment of subject exam performance reveals limitations in clinical curriculum support for students. Discover Artificial Intelligence 4: 38.
  29. Maradona, Agus Fredy, Parmod Chand, and Sumit Lodhia. 2024. Professional skills required by accountants for applying international financial reporting standards: Implications from Indonesia. Meditari Accountancy Research 32: 269–93.
  30. Meral, Gürbüz, Serdal Ateş, Serkan Günay, Ahmet Öztürk, and Mikail Kuşdoğan. 2024. Comparative analysis of ChatGPT, Gemini and emergency medicine specialist in ESI triage assessment. American Journal of Emergency Medicine 81: 146–50.
  31. Papakonstantinidis, Stavros, Piotr Kwiatek, and Filomachi Spathopoulou. 2024. Embrace or resist? Drivers of artificial intelligence writing software adoption in academic and non-academic contexts. Contemporary Educational Technology 16: ep495.
  32. Rana, Tarek, Jan Svanberg, Peter Öhman, and Alan Lowe. 2023. Introduction: Analytics in Accounting and Auditing. In Handbook of Big Data and Analytics in Accounting and Auditing. Edited by Tarek Rana, Jan Svanberg, Peter Öhman and Alan Lowe. Berlin: Springer, pp. 1–13.
  33. Retzlaff, Carl Orge, Srijita Das, Christabel Wayllace, Payam Mousavi, Mohammad Afshari, Tianpei Yang, Anna Saranti, Alessa Angerschmid, Matthew E. Taylor, and Andreas Holzinger. 2024. Human-in-the-Loop Reinforcement Learning: A Survey and Position on Requirements, Challenges, and Opportunities. Journal of Artificial Intelligence Research 79: 359–415.
  34. Rudolph, Jürgen, Samson Tan, and Shannon Tan. 2023. ChatGPT: Bullshit spewer or the end of traditional assessments in higher education? Journal of Applied Learning and Teaching 6: 342–63.
  35. Scarfe, Peter, Kelly Watcham, Alasdair Clarke, and Etienne Roesch. 2024. A real-world test of artificial intelligence infiltration of a university examinations system: A “Turing Test” case study. PLoS ONE 19: e0305354.
  36. Shchyrba, Iryna, Maryia Savitskaya, Tetiana Fursa, Olena Yeremian, and Yevheniia Ostropolska. 2024. Management accounting: The latest technologies, ChatGPT capabilities. Financial and Credit Activity: Problems of Theory and Practice 1: 160–72.
  37. Shojaee-Mend, Hassan, Reza Mohebbati, Mostafa Amiri, and Alireza Atarodi. 2024. Evaluating the strengths and weaknesses of large language models in answering neurophysiology questions. Scientific Reports 14: 10785.
  38. Stott, Frances A., and David M. Stott. 2023. A Perspective on the Use of ChatGPT in Tax Education. In Advances in Accounting Education: Teaching and Curriculum Innovations (Advances in Accounting Education, Vol. 27). Edited by T. G. Calderon. Leeds: Emerald Publishing Limited, pp. 145–53.
  39. Subramani, Mahila, Ilham Jaleel, and Surapaneni Krishna Mohan. 2023. Evaluating the performance of ChatGPT in medical physiology university examination of phase I MBBS. Advances in Physiology Education 47: 270–71.
  40. Surapaneni, Krishna Mohan, Anusha Rajajagadeesan, Lakshmi Goudhaman, Shalini Lakshmanan, Saranya Sundaramoorthi, Dineshkumar Ravi, Kalaiselvi Rajendiran, and Porchelvan Swaminathan. 2024. Evaluating ChatGPT as a self-learning tool in medical biochemistry: A performance assessment in undergraduate medical university examination. Biochemistry and Molecular Biology Education 52: 237–48.
  41. Tran, Catherine G., Jeremy Chang, Scott K. Sherman, and James P. De Andrade. 2024. Performance of ChatGPT on American Board of Surgery In-Training Examination Preparation Questions. Journal of Surgical Research 299: 329–35.
  42. Tsai, Chung-You, Shang-Ju Hsieh, Hung-Hsiang Huang, Juinn-Horng Deng, Yi-You Huang, and Pai-Yu Cheng. 2024. Performance of ChatGPT on the Taiwan urology board examination: Insights into current strengths and shortcomings. World Journal of Urology 42: 250.
  43. Turan, Engin İhsan, Abdurrahman Engin Baydemir, Funda Gümüş Özcan, and Ayça Sultan Şahin. 2024. Evaluating the accuracy of ChatGPT-4 in predicting ASA scores: A prospective multicentric study ChatGPT-4 in ASA score prediction. Journal of Clinical Anesthesia 96: 111475.
  44. Vasarhelyi, Miklos A., Kevin C. Moffitt, Trevor Stewart, and Dan Sunderland. 2023. Large Language Models: An Emerging Technology in Accounting. Journal of Emerging Technologies in Accounting 20: 1–10.
  45. Wood, David A., Muskan P. Achhpilia, Mollie T. Adams, Sanaz Aghazadeh, Kazeem Akinyele, Mfon Akpan, Kristian D. Allee, Abigail M. Allen, Elizabeth D. Almer, Daniel Ames, and et al. 2023. The ChatGPT Artificial Intelligence Chatbot: How Well Does It Answer Accounting Assessment Questions? Issues in Accounting Education 38: 81–108.
  46. Zadorozhnyi, Zenovii-Mykhailo, Volodymyr Muravskyi, Nataliia Pochynok, Vasyl Muravskyi, Anton Shevchuk, and Martin Majda. 2023. Application of Chatbots with Artificial Intelligence in Accounting. Paper presented at 2023 13th International Conference on Advanced Computer Information Technologies, Wrocław, Poland, September 21–23; Hamilton: ACIT, pp. 196–200.
  47. Zhao, Joanna (Jingwen), and Xinruo Wang. 2024. Unleashing efficiency and insights: Exploring the potential applications and challenges of ChatGPT in accounting. Journal of Corporate Accounting and Finance 35: 269–76.
Table 1. Questions by topics.

| Question (First Part) | Topics | Question (Second Part) | Topics |
|---|---|---|---|
| 1 | General—Financial accounting | 21 | Financial accounting |
| 2 | General—Taxation | 22 | Financial accounting |
| 3 | General—Financial accounting | 23 | Financial accounting |
| 4 | General—Financial accounting (with taxation) | 24 | Financial accounting |
| 5 | General—Taxation | 25 | Financial accounting |
| 6 | General—Taxation | 26 | Financial accounting |
| 7 | General—Statutory and deontological matters | 27 | Financial accounting |
| 8 | General—Statutory and deontological matters | 28 | Management accounting |
| 9 | General—Statutory and deontological matters | 29 | Management accounting |
| 10 | General—Statutory and deontological matters | 30 | Management accounting |
| 11 | General—Statutory and deontological matters | 31 | Management accounting |
| 12 | General—Taxation | 32 | Taxation |
| 13 | General—Taxation | 33 | Taxation |
| 14 | General—Taxation | 34 | Taxation |
| 15 | General—Taxation | 35 | Taxation |
| 16 | General—Management accounting | 36 | Statutory and deontological matters |
| 17 | General—Management accounting | 37 | Statutory and deontological matters |
| 18 | General—Management accounting | 38 | Statutory and deontological matters |
| 19 | General—Management accounting | 39 | Statutory and deontological matters |
| 20 | General—Management accounting | 40 | Statutory and deontological matters |
Table 2. Expected versus ChatGPT’s answers for the OCC exam.

| Question | Section | Expected | ChatGPT | Question | Section | Expected | ChatGPT |
|---|---|---|---|---|---|---|---|
| 1 | General—Financial accounting | C | C | 21 | Financial accounting | D | A |
| 2 | General—Taxation | B | B | 22 | Financial accounting | B | B |
| 3 | General—Financial accounting | A | C | 23 | Financial accounting | C | D |
| 4 | General—Financial accounting (with taxation) | A | A | 24 | Financial accounting | D | A |
| 5 | General—Taxation | C | A | 25 | Financial accounting | A | B |
| 6 | General—Taxation | C | B/C | 26 | Financial accounting | * | C |
| 7 | General—Statutory and deontological matters | C | A | 27 | Financial accounting | A | A |
| 8 | General—Statutory and deontological matters | B | D | 28 | Management accounting | C | C |
| 9 | General—Statutory and deontological matters | C | B | 29 | Management accounting | B | B |
| 10 | General—Statutory and deontological matters | A | D | 30 | Management accounting | B | B |
| 11 | General—Statutory and deontological matters | C | D | 31 | Management accounting | B | B |
| 12 | General—Taxation | B | B | 32 | Taxation | B | A |
| 13 | General—Taxation | B | None | 33 | Taxation | D | C |
| 14 | General—Taxation | D | A/C/D | 34 | Taxation | A | A |
| 15 | General—Taxation | B | C | 35 | Taxation | A | C |
| 16 | General—Management accounting | C | C | 36 | Statutory and deontological matters | C | A/B/C |
| 17 | General—Management accounting | D | D | 37 | Statutory and deontological matters | D | D |
| 18 | General—Management accounting | B | B | 38 | Statutory and deontological matters | B | B |
| 19 | General—Management accounting | D | D | 39 | Statutory and deontological matters | A | A/B |
| 20 | General—Management accounting | D | D | 40 | Statutory and deontological matters | C | A |

Note: * This answer was considered invalid by OCC.
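As an illustration of how the per-topic pattern discussed in the findings can be reproduced, the sketch below tallies the management accounting rows of Table 2 (expected answer versus ChatGPT’s answer, transcribed from the table):

```python
# Management accounting questions from Table 2: (expected, ChatGPT) pairs
# for version 3.5, transcribed from the table above.
management = {
    16: ("C", "C"), 17: ("D", "D"), 18: ("B", "B"), 19: ("D", "D"),
    20: ("D", "D"), 28: ("C", "C"), 29: ("B", "B"), 30: ("B", "B"),
    31: ("B", "B"),
}
correct = sum(expected == answered for expected, answered in management.values())
print(f"{correct}/{len(management)} correct")  # 9/9 correct, as reported
```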