How ChatGPT is changing the job hiring process, from the HR department to coders

32 points by ta_u 2 years ago | 24 comments
  • waboremo 2 years ago
    Beyond the general idea of incorporating ChatGPT into everyday work: IMO, ChatGPT is highlighting all of the biggest sore spots of the hiring process, and instead of finding solutions we're just burying our heads further in the sand.

    Job descriptions, resumes, cover letters, filling out forms, writing boilerplate for take home projects, etc. All of these are massive problems in the hiring process, and should be addressed before the entire thing turns into a complete nonsensical clown fest of nobody reading anything unless they're face to face with somebody.

    So many of these things are just leftovers from the old days. We demand a resume not because it's an accurate assessment of history or skill but because "it's tradition". We write job descriptions not based on the actual demands of the job, but to cram in as many keywords as possible in the hope that your ideal candidate doesn't miss it. The list goes on, but we can't start addressing any of it until we admit there's a serious problem - and this time not just rely on Google to maybe come up with something new.

    • loa_in_ 2 years ago
      As I understand work, an unfilled company job has:

          total normalised workload: w
          c requirements: r_i(person skill_i, time), for i = 1..c
          number of positions to fill: n

          where, integrating over time from now to infinity:

              sum over i := 1 to c of
                  involvement of person × r_i(person skill_i)
              = w

          (we solve for total involvement × requirement = total normalised workload,
          because it's the boundary of an inequality: >= 100% of w)

      That is applicable to L, the set of all living people at the moment:

          finds out about the posting: A, a subset of L
          would accept: B, a subset of L

          beta(person) := how much of the required workload this person fills,
          after integrating their (involvement × requirement) over their time there

          time: 0 to inf (now until the end of the universe)
          involvement(t): -inf to inf (work units)
          requirement_i(t): [0, 1] (in units inverse to work units)

          (a capable and willing person who's not conscious will have involvement = 0;
          someone who would only work on weekends, at a place that's closed on weekends,
          has person skill_attendance > 0 on weekends but requirement_attendance > 0
          only on workdays - a disjoint set)

          a person with beta = 1 fills the position until the position ceases to exist
          (basically nobody has beta = 1; that happens probably only if the trade
          becomes obsolete or the position is e.g. to paint a specific room)

          remaining workload after parting ways (company closed, person dead, and/or
          universe ended):

              w_r = (1 - beta(person)) × w

          if w_r = 0 they never have to hire anyone for this position again;
          otherwise the hiring process repeats with w = w_r

      Here's a start at formalizing the problem, if someone is willing to look into it. It's my half-hour armchair take, because I figured it's all I can do. Even if it amounts to nothing at all, I tried and enjoyed the process.
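
      If it helps, here is a minimal Python sketch of the same toy model, with made-up numbers rather than anything real (the discrete days, the workload of 1000, and the example schedules are all just assumptions of mine to make it runnable):

          # Toy model from the comment above: beta is the fraction of the normalised
          # workload w that one person covers, summing involvement × requirement over
          # discrete days instead of integrating over continuous time.

          def beta(involvement, requirements, w, days):
              covered = sum(
                  involvement(t) * r(t)
                  for t in range(days)
                  for r in requirements
              )
              return covered / w

          # Hypothetical example: a weekend-only worker at a place closed on weekends.
          involvement = lambda t: 1.0 if t % 7 in (5, 6) else 0.0  # works Sat/Sun only
          requirement = lambda t: 1.0 if t % 7 < 5 else 0.0        # needed Mon-Fri only

          w = 1000.0
          b = beta(involvement, [requirement], w, days=365)
          w_r = (1 - b) * w   # remaining workload; if 0, never hire for this role again
          print(b, w_r)       # -> 0.0 1000.0 (disjoint schedules, nothing gets covered)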

      I think my main point is that hiring processes seem to be very far from addressing the basic parameters of the actual process of working somewhere.

      Let's get GPT-4 onto this maybe

    • bartislartfast 2 years ago
      Recently had to hire a new dev, and one of the submissions used ChatGPT to generate a solution for our take-home test.

      The really clever thing was, they did a line-by-line breakdown of the solution ChatGPT came up with, keeping certain parts and throwing out others, giving reasons why each part deserved to stay or be tossed.

      Then they wrote a second solution incorporating parts of the first and their own original code, which was genius. They put it all in a readme file with the solution. It instantly put them at the top of our candidate pile.

      Unfortunately, the in-person interview didn't go so well, so we didn't hire them.

      Smart approach though. If they'd just used ChatGPT to come up with some bog standard interview questions and solutions they'd probably have walked right into a job with us.

      • yieldcrv 2 years ago
        so at what point does your in-person interview become the problem?

        what is the line for you, when the day-to-day doesn't involve unassisted time trials, but instead requires code reviews of assisted work and code reviews of colleagues' work?

        • bartislartfast 2 years ago
          code reviews are not the be-all and end-all of a coding job.

          an in-person interview can give you some insight into what it might be like to work with a person day to day; it has more value than a take-home test in a lot of ways.

          • yieldcrv 2 years ago
            does your in-person interview do that?
      • andrewstuart 2 years ago
        I did a coding test with ChatGPT and supplied the prompt plus a screencast of me doing the coding test with ChatGPT.

        Didn’t even get a phone interview.

        I really don’t care though, because coding tests are usually a silly waste of time, and I’ve had my time completely burned by employers wanting coding tests to which they then reply with either nothing at all or some one-liner like “we didn’t like it”.

        If you DO do a coding test, agree to it only on the condition that they supply you their assessment methodology first … a request to which very few employers will respond.

        Coding tests are mostly arbitrary, meaningless, and unscientific, and they evaluate nothing real-world.

        I say refuse coding tests and go to another employer.

        • yieldcrv 2 years ago
          I’ve had my time wasted too

          “We just want to see how you think”

          really meaning they wanted unit tests, client side caching, and a particular design pattern

          or really meaning they didn't want you to go outside the scope

          or really meaning that they didn't even know the budget for their whole department could be revoked, so you wasted your time even if the single gatekeeper didn’t arbitrarily move the goalposts on you

          in my experience, nobody can even tell you the details of what the “technical assessment” will be. I’ve expected coderpads and instead gotten surprise screen-sharing requests to use my own IDE, requiring an environment that isn’t even set up on that computer because I’ve been on a work laptop for who knows how long. I’ve NEVER seen a take-home project come with clear, prewritten unit tests that the result had to conform to; I don't think that's a realistic expectation in this day and age.

          • anuragsoni 2 years ago
            I've definitely had bad experiences with take-home tests, but well-run take-home setups can be a lot nicer than phone screens with live coding.

            When I interviewed at my current employer, they made it very clear that the take-home was meant as a conversation starter, and everyone who got a take-home made it to the on-site (virtual in this case, since it's a remote role). This removed the guesswork about whether the test would be a waste of time, because the on-site interview was guaranteed. The test itself was also a more accurate representation of a simpler version of the problems the team was working on, as opposed to a generic algorithm problem.

            Overall this was the most positive interview loop I've ever been through, and it was a big factor in my decision to say yes to the job offer.

          • kaesar14 2 years ago
            It does seem somewhat hard to imagine being employed as a SWE in 5 years without being proficient in incorporating some sort of generative AI into your work.

            Edited to include “in 5 years”

            • blibble 2 years ago
              the lawsuit against GitHub/Microsoft for scraping the entire internet for code wins and kills the practice entirely

              there, that was easy to imagine

              • MrDresden 2 years ago
                This will linger in the courts for years, if not a decade, and all the while the tech and the industry will move on. Think of the Oracle v. Google case involving Java on Android: it was finally settled, but it didn't have any impact.

                And if it sinks one model, there will be others. As has been said, the genie is out of the bottle.

                • kaesar14 2 years ago
                  Another tool will just take its place. It's too useful.
                  • blibble 2 years ago
                    if fair use doesn't apply to training... what would it be trained on?

                    Microsoft's code base?

                    who the hell would pay for that?

                • ilaksh 2 years ago
                  It's questionable whether, in five years, there will be many jobs that are very similar at all to SWE today.

                  There is no reason to believe that the large (language/multimodal) models will stop progressing. In six months the IQ of ChatGPT increased by 30-40 points. If we pretend these gains are linear (so far they have actually been exponential), then that's a 430 IQ in five years - and that assumes the acceleration of progress stops entirely and things only improve linearly from here on out.
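
                  Spelling out that back-of-the-envelope extrapolation (the starting point of ~100 and the ~33 points per six months are just the assumptions implied by the numbers above, nothing measured):

                      # linear extrapolation of the claimed IQ gains, purely illustrative
                      start_iq = 100
                      gain_per_half_year = 33
                      half_years = 5 * 2          # five years = ten six-month steps
                      print(start_iq + gain_per_half_year * half_years)   # -> 430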

                  There are quite a few people who think this stuff can easily become millions of times smarter. I am doubtful of that in the short term, because I think there are practical limits to compression, but you have to admit that multiple-orders-of-magnitude increases in speed, data size, and efficiency are very plausible - especially as new compute-in-memory systems start to roll out, which is feasible within five years.

                  The non-conservative and increasingly popular belief is that these systems will be hundreds of times faster than humans and fully autonomous within a few years. That seems extremely foolhardy, but it's also the most likely path, given society's inability to recognize and adapt to the real danger with enough nuance.

                  So I am wondering if there will be any jobs for any people at that point unless the AI is deeply integrated into their brain.

                  • kaesar14 2 years ago
                    The only counterpoint I have is that it seems the GPT models have already been trained on most of the available information on the Internet. So where exactly does future progress come from?
                    • ilaksh 2 years ago
                      Let's suppose that the datasets don't increase in size. Look at the difference in speed between the old ChatGPT and turbo ChatGPT. Suppose that within 6-8 months they can do that again with GPT-4.

                      I think that would be about five times faster than a human.

                      But there is more data. Do you really think they have ingested every single YouTube video or movie? Video and video+transcription is the next thing.

                      Another source of data could be to have the models study and reprocess information into more concise language (possibly new vocabulary) or diagrams, with the goal of increasing the level of abstraction or information density.
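
                      A rough sketch of what that reprocessing loop might look like, purely as an illustration - `generate` here is a placeholder for whatever model call you'd actually use, not a real API:

                          # Ask a model to rewrite each document as densely as possible and
                          # keep the rewrite as additional training text.
                          def reprocess(corpus, generate):
                              condensed = []
                              for doc in corpus:
                                  prompt = ("Rewrite the following as concisely as possible, "
                                            "inventing shorthand or new vocabulary if it helps:\n\n" + doc)
                                  condensed.append(generate(prompt))
                              return condensed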

                    • JohnFen 2 years ago
                      > unless the AI is deeply integrated into their brain.

                      At which point, are they people anymore?

                      • dpkirchner 2 years ago
                        Especially when tools like ChatGPT replace StackOverflow. Or augment SO, if Stack Exchange is forward-thinking enough, IMO.
                        • morkalork 2 years ago
                          Small things like passing new questions into the AI to check for novelty would be nice. Now, if they could find a way to flag and de-rank obviously out-of-date solutions, that would be cool.
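
                          One way that novelty check could work, as a sketch only - `embed` in the comments stands in for whatever embedding model you'd call; it's not a real library function:

                              # Flag a new question as a duplicate if its embedding is too close
                              # to any existing question's embedding (vectors from some embed()).
                              from math import sqrt

                              def cosine(a, b):
                                  dot = sum(x * y for x, y in zip(a, b))
                                  na = sqrt(sum(x * x for x in a))
                                  nb = sqrt(sum(y * y for y in b))
                                  return dot / (na * nb)

                              def is_novel(new_vec, existing_vecs, threshold=0.9):
                                  return all(cosine(new_vec, v) < threshold for v in existing_vecs)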
                          • zumu 2 years ago
                            If ChatGPT is trained on StackOverflow and people are using ChatGPT instead of StackOverflow, it seems pretty clear ChatGPT's use of StackOverflow's data is not protected under fair use.