That depends very much on how you upscale it. If you just do a ‘Nearest Neighbour’ upscale to 1080p, then yes, it will be made up of ‘giant pixels’: blocks of identical pixels roughly 2 or 3 pixels wide and tall. If you perform an upscale with a different algorithm - bilinear, or Lanczos, or any of the upscalers you can use in AE/Premiere like Adobe’s Detail-Preserving Upscale algorithm, or Red Giant Instant 4K, etc. - then you most certainly won’t have ‘giant pixels’. Unfortunately, there’s not much point doing that, because those upscalers are, frankly, only slightly better than what your TV already does in real time when you play 480p or 576p files on a 1080p or 4K screen.
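To make the ‘giant pixels’ point concrete, here’s a minimal sketch of what a nearest-neighbour upscale actually does - every source pixel is simply repeated into a block, with no interpolation at all. (This is a toy illustration on a list-of-lists ‘frame’, not how any real NLE implements it; the function name is my own.)

```python
def nearest_neighbour_upscale(frame, factor):
    """Upscale a 2D grid of pixel values by repeating each pixel.

    Each source pixel becomes a factor x factor block of identical
    values - which is exactly why a nearest-neighbour upscale of
    SD footage looks like it's made of 'giant pixels'.
    """
    out = []
    for row in frame:
        # Repeat each pixel horizontally...
        scaled_row = [px for px in row for _ in range(factor)]
        # ...then repeat the whole row vertically.
        out.extend([scaled_row[:] for _ in range(factor)])
    return out

# A tiny 2x2 'frame' upscaled 2x: every pixel becomes a 2x2 block.
frame = [[10, 20],
         [30, 40]]
print(nearest_neighbour_upscale(frame, 2))
# → [[10, 10, 20, 20], [10, 10, 20, 20], [30, 30, 40, 40], [30, 30, 40, 40]]
```

Bilinear, Lanczos and friends instead compute each output pixel as a weighted blend of neighbouring source pixels, which is why they produce smooth (if soft) results rather than blocks.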
The real trick comes with machine learning upscalers. Unlike algorithmic upscalers (the ones described previously), which essentially just perform some maths on the image, these actually have some sense of what they’re looking at. They’ve been ‘trained’ for many, many hours on pairs of images - one low-res, one high-res - to notice patterns and recognise things in a way much closer to how we as humans perceive images. The most consumer-friendly of these currently is Topaz Labs’ Video Enhance AI, but others exist, like TecoGAN and SOFVSR. Getting a good result out of these requires a good video card, lots of trial and error, and plenty of patience, however.