AI has been the biggest boom in the tech industry in the last year or so. With the ability to solve complex problems in math and science, write code, and draft essays (not great ones), AIs like ChatGPT have been the latest buzz in education. Because of its streamlined nature and ease of use, AI sparks questions about the line between a helpful tool and a cheating machine. For me, that line falls to the user. AI is prompt-based by nature: if you write “Write this code for me,” will you actually be learning anything? The more specific the prompt, the more you learn. My favorite this semester has been “Without showing any code, explain the thought process behind doing xxx and helpful syntax that would speed up the coding process.” This has helped me learn about the code I am writing rather than copy-pasting straight from ChatGPT. The only exception was WODs: in a credit/no-credit system, you’d do anything for the credit, especially under a time constraint.
Because the Experiences had tutorial videos, I didn’t see myself using AI as much on these. Sometimes I would ask it a specific question, treating it more like a personalized Stack Overflow or GeeksforGeeks. I was more likely to ask a friend for help than AI.
For the simpler practice WODs, I would use the prompt I mentioned in the introduction to coach me through the syntax of different frameworks. Once the WODs started involving too many files, it became hard to rely on ChatGPT for explanations, because it would have to pull in components from other files. This was especially relevant when we worked with Next.js and Prisma. ChatGPT is notably bad at command-line prompts.
I used AI for a little over half the WODs. Some were simple and straightforward, doable with just the basic practice given in class and from assignments. Others were difficult, and I had trouble implementing different frameworks, especially Bootstrap and some React components. I would have liked to use AI less, but if my grade is on the line, I’m going to use what’s available to pass.
In essays, AI was used only to find synonyms, shorten wordy sentences, or fix grammatical mistakes. This was mostly done with Grammarly, ChatGPT, and later Claude, as I think its responses sound less robotic than ChatGPT’s. All concepts and basic structures are human-made.
On the final project, I used Copilot. Its integration into VS Code made it a little easier to fix syntax and typing issues on the template page. My team leader for the final project was very advanced and helped my group figure out many technical and coding issues through comments and during our meetings. Copilot had trouble recognizing ESLint errors and often made coding suggestions that either broke the code or introduced more ESLint errors. It was somewhat successful when given prompts like “Translate this code xxx so it fits with the syntax and variables in component xxx.” I used this a lot, since our final project had many recycled elements, and it saved the tedious work of finding the one-to-one variable matches across different components and style.css files.
This was the most helpful way I used AI. It helped me learn the frameworks taught in class and broke down the basic concepts of HTML and CSS for me at the beginning of the course. I also used it often during the switch from Bootstrap to React, because of how fast the class moved when skipping between frameworks.
If you need AI to answer a question asked in class or on Discord, maybe you shouldn’t be the one answering. What if they have a follow-up question? Are you going right back to ChatGPT? It is best to answer questions only when you are confident in your solution and can explain it authentically.
I didn’t have many smart questions while taking this course. Most of my questions came from installing external tools like Prisma and pgAdmin incorrectly; it took me a week and three different people to get them installed and working properly. AI doesn’t understand software installations or how to adapt installation instructions to your device’s current setup.
I used coding examples a lot when learning functional programming. I would get the built-in functions mixed up, so I had ChatGPT make sample problems; I would then write out the code for each problem and have ChatGPT grade it.
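The kind of mix-up I mean can be sketched with JavaScript’s array built-ins (a generic example, not one of the course’s actual problems): `map`, `filter`, and `reduce` look similar at a glance but do very different things.

```javascript
// Three array built-ins that are easy to confuse:
// map transforms each element, filter selects elements, reduce folds to one value.
const nums = [1, 2, 3, 4];

const doubled = nums.map((n) => n * 2);          // transform: [2, 4, 6, 8]
const evens = nums.filter((n) => n % 2 === 0);   // select:    [2, 4]
const sum = nums.reduce((acc, n) => acc + n, 0); // fold:      10

console.log(doubled, evens, sum);
```

Writing out tiny examples like this, then checking my answers, is what finally made the distinctions stick.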
AI helped me understand the jumps from framework to framework, as a lot of them had similar functions but different syntax. This allowed me to better understand Bootstrap in terms of HTML and CSS, and React in terms of Bootstrap.
I only did this on timed WODs, because I needed the grade, and during the final project for areas like Prisma and database manipulation. I would usually ask for the solution and then work backward.
AI usually adds comments automatically, which is handy when looking through large files. I never used it directly to add comments, because a decent amount of the code we wrote was copy-pasted from other assignments with just the wording changed. The fact that not many people were looking at my code also meant I could be a little more lenient with my commenting.
I would use Copilot and ChatGPT to explain why certain lines of code had red squiggly lines under them, and they would usually provide a bad solution because they only had the context of the line or the file it was in. This proved especially difficult in the final project, as so many features used code and functions from other files that it was hard to track where everything in one file originally came from.
I basically used AI as a coder in dire situations, a replacement for forum-based explanations, and a spelling/grammar/thesaurus helper. I did not want to rely on it as heavily as I did, but I fell behind in the middle of the semester and had trouble catching up.
For me, the answer is yes, kind of. I do think it is a great tool that can help most people easily break down ideas and do basic coding. But it can just as easily be used to break academic integrity and replace fundamental human processes like problem-solving. If you can use AI responsibly, without blatantly using it to cheat, it’s great; that line is incredibly fine and must not be overstepped. For learning, it is an amazing tool that can answer anyone’s personalized questions and removes the stigma of asking “stupid” questions in lectures. Because AI can be used privately, you can ask it about hyperspecific problems and get decent accuracy, or at least a blueprint for where to look further. For the future, I think it is important to train AI models to better suit what they are being used for. In education, especially in computer science, you could remove the AI’s ability to write code and have it only explain topics. It would also help to load it with key books and materials related to computer science, like Introduction to Algorithms by Cormen et al.
AI can be a great tool if utilized correctly. Use it as an assistant rather than a replacement. You, as the user, have to put in the majority of the work, problem-solving, and decision-making to fully understand what the AI is helping you with. Let it replace typing out tedious boilerplate or essay structures, but don’t let it replace the thought process and human quirks.