Just out of curiosity. I have no moral stance on it, if a tool works for you I'm definitely not judging anyone for using it. Do whatever you can to get your work done!
High school history teacher here. It’s changed how I do assessments. I’ve used it to rewrite all of the multiple choice/short answer assessments that I do. Being able to quickly create different versions of an assessment has helped me limit instances of cheating, but also to quickly create modified versions for students who require that (due to IEPs or whatever).
The cool thing that I’ve been using it for is to create different types of assessments that I simply didn’t have the time or resources to create myself. For instance, I’ll have it generate a writing passage making a historical argument, but I’ll have AI make the argument inaccurate or incorrectly use evidence, etc. The students have to refute, support, or modify the passage.
Due to the risk of inaccuracies and hallucination I always 100% verify any AI generated piece that I use in class. But it’s been a game changer for me in education.
I should also add that I fully inform students and administrators that I’m using AI. Whenever I use an assessment that is created with AI I indicate with a little “Created with ChatGPT” tag. As a history teacher I’m a big believer in citing sources :)
How has this been received?
I imagine that pretty soon using ChatGPT is going to be looked down upon like using Wikipedia as a source
I would never accept a student’s use of Wikipedia as a source. However, it’s a great place to go initially to get to grips with a topic quickly. Then you can start to dig into different primary and secondary sources.
ChatGPT is the same. I would never use the content it makes without verifying that content first.
Is it fair to give different students different wordings of the same questions? If one wording is more confusing than another could it impact their grade?
I'm a special education teacher and today I was tasked with writing a baseline assessment for the use of an iPad. Was expecting it to take all day. I tried starting with ChatGPT and it spat out a pretty good one. I added to it and edited it to make it more appropriate for our students, and put it in our standard format, and now I'm done, about an hour after I started.
I did lose 10 minutes to walking round the deserted college (most teachers are gone for the holidays) trying to find someone to share my joy with.
I wish I had that much opportunity to write (or fabricate) my own teaching material. I'm in a standardized testing hellscape where almost every month there's yet another standardized test or preparation for one.
It’s one of the fascinating paradoxes of education that the more you teach to standardized tests, the worse test results tend to be. Improved test scores are a byproduct of strong teaching - they shouldn’t be the only focus.
Teaching is every bit as much an art as it is a science, and straitjacketing teachers with canned curricula only results in worse test scores and a deteriorated school experience for students. I don’t understand how there are admins out there that still operate like this. The failures of No Child Left Behind mean we’ve known this for at least a decade.
A junior team member sent me an AI-generated sick note a few weeks ago. It was many, many neat and equally-sized paragraphs of badly written excuses. I would have accepted "I can't come in to work today because I feel unwell" but now I can't take this person quite so seriously any more.
I had a coworker come to me with an "issue" he learned about. It was wrong and it wasn't really an issue, and then it came out that he got it from ChatGPT and didn't really know what he was talking about, nor could he cite an actual source.
I've also played around with it and it's given me straight up wrong answers. I don't think it's really worth it.
It's just predictive text, it's not really AI.
i think learning where it can actually help is a bit of an art - it's just predictive text, but it's very good predictive text - if you know what you need and get good at giving it the right input it can save a huge amount of time. you're right though, it doesn't offer much if you don't already know what you need.
Can you give me an example? I keep hearing this, but every time somebody presents something, be it work related or not, it feels like at best it would serve as better lorem ipsum.
I’ve had good success using it to write Python scripts for me. They’re simple enough I would be able to write them myself, but it would take a lot of time searching and reading StackOverflow/library docs/etc since I’m an amateur and not a pro. GPT lets me spend more time actually doing the things I need the scripts for.
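The thread doesn't include any of the actual scripts, but the sort of thing being described is small automation glue. As a hedged illustration (a hypothetical task, not one from this thread), a GPT-drafted script for an amateur might look like:

```python
import csv
import io

def summarize_csv(text, column):
    """Return count, min, max, and mean of a numeric column in CSV text."""
    rows = list(csv.DictReader(io.StringIO(text)))
    # Skip rows where the column is missing or empty.
    values = [float(r[column]) for r in rows if r.get(column)]
    return {
        "count": len(values),
        "min": min(values),
        "max": max(values),
        "mean": sum(values) / len(values),
    }

data = "name,age\nalice,34\nbob,28\ncarol,45\n"
print(summarize_csv(data, "age"))
```

Simple enough to write yourself, but exactly the kind of thing where GPT saves a trip through the `csv` module docs.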
I use it with web development by describing what I want something to look like and having it generate a React component based on my description.
Is what it gives me the final product? Sometimes, but it’s such a help to knock out a bunch of boilerplate and get me close to what I want.
Also generating documentation is nice. I wanted to fill out some internal wiki articles to help people new to the industry have something to reference. Spent maybe an hour having a conversation asking all of the questions I normally run into. Cleaned up the GPT text, checked for inaccuracies, and cranked out a ton of resources. That would have taken me days, if not weeks.
At the end of the day, GPT is better with words than I am, but it doesn’t have the years of experience I have.
More often than not you need to be very specific and have some knowledge on the stuff you ask it.
However, you can guide it to give you exactly what you want. I feel like knowing how to interact with GPT is becoming similar to being good at googling stuff.
I've played around with it for personal amusement, but the output is straight up garbage for my purposes. I'd never use it for work. Anyone entering proprietary company information into it should get a verbal shakedown by their company's information security officer, because anything you input automatically joins their training database, and you're exposing your company to liability when, not if, OpenAI suffers another data breach.
The very act of sharing company information with it can land you and the company in hot water in certain industries. Regardless if OpenAI is broken into.
not chatGPT - but I tried using copilot for a month or two to speed up my work (backend engineer). Wound up unsubscribing and removing the plugin after not too long, because I found it had the opposite effect.
Basically instead of speeding my coding up, it slowed it down, because instead of my thought process being
1. Think about the requirements
2. Work out how best to achieve those requirements within the code I'm working on
3. Write the code
It would be
1. Think about the requirements
2. Work out how best to achieve those requirements within the code I'm working on
3. Start writing the code and wait for the autocomplete
4. Read the autocomplete and decide if it does exactly what I want
5. Do one of the following depending on step 4:
    - 5a. Use the autocomplete as-is
    - 5b. Use the autocomplete, then modify it to fix a few issues or account for a requirement it missed
    - 5c. Ignore the autocomplete and write the code yourself
idk about you, but the first set of steps just seems like a whole lot less hassle than the second set, especially since for anything that involved any business logic or internal libraries, I found myself using 5c far more often than the other two. And as a bonus, I actually fully understand all the code committed under my username, on account of actually having written it.
I will say though in the interest of fairness, there were a few instances where I was blown away with copilot's ability to figure out what I was trying to do and give a solution for it. Most of these times were when I was writing semi-complex DB queries (via Django's ORM), so if you're just writing a dead simple CRUD API without much complex business logic, you may find value in it, but for the most part, I found that it just increased cognitive overhead and time spent on my tickets
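For a sense of what "semi-complex DB queries" means here: the commenter's actual Django ORM code isn't in the thread, but the shape is typically a join plus aggregation plus filter on the aggregate. A hedged sketch of that shape in plain SQL via Python's stdlib `sqlite3` (hypothetical schema and data, not theirs):

```python
import sqlite3

# Hypothetical schema standing in for the commenter's Django models.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE "order" (id INTEGER PRIMARY KEY,
                          customer_id INTEGER REFERENCES customer(id),
                          total REAL);
    INSERT INTO customer VALUES (1, 'Acme'), (2, 'Globex');
    INSERT INTO "order" VALUES (1, 1, 120.0), (2, 1, 80.0), (3, 2, 30.0);
""")

# "Customers whose orders sum to more than 100" -- join, group, then
# filter on the aggregate, which is where autocomplete tends to shine.
rows = conn.execute("""
    SELECT c.name, SUM(o.total) AS spent
    FROM customer c JOIN "order" o ON o.customer_id = c.id
    GROUP BY c.id
    HAVING spent > 100
    ORDER BY spent DESC
""").fetchall()
print(rows)  # [('Acme', 200.0)]
```

In Django ORM terms the same query would be roughly an `annotate(Sum(...))` followed by a `filter()` on the annotation.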
EDIT: I did use chatGPT for my peer reviews this year though and thought it worked really well for that sort of thing. I just put in what I liked about my coworkers and where I thought they could improve in simple English, and it spat out very professional peer reviews in the format expected by the review form.
I use it to write performance reviews because in reality HR has already decided the results before the evaluations.
I'm not wasting my valuable time writing text that is then ignored. If you want a promotion, get a new job.
To be clear: I don't support this but it's the reality I live in.
This is exactly what I use it for. I have to write a lot of justifications for stuff like taking training, buying equipment, going on business travel, etc. - text that will never be seriously read by anyone and is just a check-the-box exercise. The quality and content of the writing is unimportant as long as it contains a few buzz-phrases.
Just chiming in as another person who does this, it's absolutely perfect. I just copy and paste the company bs competencies, add in a few bs thoughts of my own, and tell it to churn out a full review reinforcing how they comply with the listed competencies.
It's perfect, just the kinda bs HR is looking for, I get compliments all the time for them rofl.
Why should anyone care? I don't go around telling people every time I use stack overflow. Gotta keep in mind gpt makes shit up half the time so I of course test and cross reference everything but it's great for narrowing your search space.
The problem with using it is that you might be sending company proprietary or sensitive information to a third party that's going to mine that information and potentially expose that information, either directly or by being hacked. For example, this whole thing with Samsung data: https://techcrunch.com/2023/05/02/samsung-bans-use-of-generative-ai-tools-like-chatgpt-after-april-internal-data-leak/
We've been instructed to use ChatGPT generically. Meaning, you ask it generic questions that have generic usage, like setting up a route in Express. Even if there is something more specific to my company, it almost always can be transformed into something more generic, like "I have a SQL DB with users in it, some users may have the 'age' field, I want to find users that have their age above 30" where age is actually something completely different (but still a number).
Just need to work carefully on ChatGPT.
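To make the "genericize it first" idea concrete, the example question above ("users with their age above 30, where some users may not have the field") comes back as something like the following, sketched here with Python's stdlib `sqlite3`; the table and column names are the generic stand-ins, not anyone's real schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, age INTEGER)")
conn.executemany("INSERT INTO users (name, age) VALUES (?, ?)",
                 [("alice", 42), ("bob", 25), ("carol", None)])

# Rows where 'age' is missing (NULL) fail the comparison and are skipped,
# which handles the "some users may have the field" part.
over_30 = conn.execute("SELECT name FROM users WHERE age > 30").fetchall()
print(over_30)  # [('alice',)]
```

Back at work, you swap "age" for whatever the real (still numeric) field is; the answer transfers because nothing company-specific went into the question.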
Yes, although there's been a huge spike in cancer diagnosis I've been giving out since doing so. Whoops!
A friend of mine just used it to write a script for an Amazing Race application video. It was quite good.
How the heck did it access enough source material to be able to imitate something that specific and do it well? Are we humans that predictable?
Only used it a couple of times for work when researching some broad topics like data governance concepts.
It’s a good tool for learning because you can ask it about a subject and then ask it to explain the subject “as a metaphor to improve comprehension” and it does a pretty good job. Just make sure you use some outside resources to ensure you’re not being hallucinated all over.
My bosses use it to write their emails (ESL).
ESL is actually a great use, although there's a risk someone might not catch a hallucination/weird tone issue. Still it would be really helpful there.
yeah my biggest use case is quick summaries of things. it's great getting a few bullet points, and i miss details a lot less.
My supervisor uses ChatGPT to write emails to higher-ups and it's kinda embarrassing lol. One email he's not even capitalizing or spell-checking, and the next he has these emails that over-explain simple things and are half irrelevant.
I've used it a couple times for work when I can't fully put into words what I'm trying to say, but I use it more for inspiration than anything. I've also used it once or twice in my personal life for translating.
There was some issue that came up relating to network shares on a Windows domain that didn't make sense to me and a colleague. I asked GPT to describe why we were seeing whatever behavior, and it defined the scope of the feature in a way that completely demystified it for my coworker. I'm a Mac and Linux guy, so while I could loosely grasp it, it was gone from my mind shortly after. Windows domains and file sharing have always been bizarre to me.
Anyway, we didn't hide it. He gave it credit when explaining the answer to the rest of the team in a meeting. This was around the end of last year. The company since had layoffs and I'm looking for a new job, but I did have it reformat my resume and it did a great job. I've never been great at page-layout stuff, as I'm a plain text warrior.
You can have ChatGPT edit a pdf input? I thought it only took plaintext. This sounds super helpful.
presumably someone has an instance fine-tuned to write coherent latex
More like it took blocks of text and formatted them as bullet points and cleaned up muddled presentation. Sorry for not being clearer.
I use GPT-4 daily. I worked with it to create a quick and convenient app on my smartwatch, which allows it to provide wisdom and guidance fast whenever I need it. For more granular things, I use its BingChat interface, which can search the web and see images. The AI has helped me with understanding how to complete tasks, providing counseling for me, finding bugs in my code, writing functions, teaching me how to use software like Excel and Outlook, and giving me random information about various curiosities that pop into mind.
I don't keep it a secret and tell anyone who asks. Plus it's kinda obvious that something is going on with me. I always wear bone conduction headsets that allow the AI to whisper in my ear without shutting out the world, and sometimes talk to my watch.
The responses to knowing what I'm doing have almost always been extreme: very positive or very negative. The machine is controversial, and when some can no longer stay in comfortable denial of its efficacy, they turn to speaking out against its use.
Edit: just fixed its translation method. Now the watch will hear non-english speech and automatically translate it for me too (uses Whisper API)
I know many people slightly younger than me who are using ChatGPT to breeze through university assignments. Apparently there's one website that uses GPT that even draws diagrams for you, so you don't have to make 500 UML and class diagrams that take forever to create.