I hadn't considered that smaller packets might be preferable.
And yes, I was referring to a chunk of data when I said 'packet'.
It's worth distinguishing the chunks of data you're sending from A->B from actual TCP/IP packets, which is probably where my explanation got confusing. For the data you, the user, are sending from A->B, bigger is better.
A single 10 MB file will transfer faster than 1000 x 10 KB files because it incurs less overhead from the per-file operations that go on in the background (opening the file, closing it, AV scanning the new file, packaging the file into packets to send across the network, and so on).
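If you're curious, you can see the per-file overhead without even touching a network. Here's a rough Python sketch (purely illustrative, my own example, and the timings will vary wildly depending on your disk, OS and AV) that writes the same 10 MB once as a single file and once as 1000 small files:

```python
# Rough local illustration of per-file overhead: the same 10 MB written as
# one file vs. 1000 small files. The many-files case pays for an extra
# open/close (and possibly an AV scan) per file.
import os
import tempfile
import time

PAYLOAD = b"x" * 10_000          # 10 KB chunk
N_FILES = 1000                   # 1000 * 10 KB == one 10 MB file

with tempfile.TemporaryDirectory() as tmp:
    start = time.perf_counter()
    with open(os.path.join(tmp, "big.bin"), "wb") as f:
        for _ in range(N_FILES):
            f.write(PAYLOAD)
    one_file = time.perf_counter() - start

    start = time.perf_counter()
    for i in range(N_FILES):
        with open(os.path.join(tmp, f"small_{i}.bin"), "wb") as f:
            f.write(PAYLOAD)
    many_files = time.perf_counter() - start

print(f"1 x 10 MB file:     {one_file:.3f}s")
print(f"1000 x 10 KB files: {many_files:.3f}s")
```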
'So what is a packet?' you ask. Well, I am not going to butcher an explanation of the OSI model, but the tl;dr is: a packet is the data you are sending, automatically broken into chunks suitable for your network based on a maximum packet size (the MTU, usually ~1500 bytes but up to ~9000 bytes with jumbo frames). Each packet has additional information prepended to it that tells the various parts of your OS, the receiving OS, and the network hardware in between what to do with that data: where it's going, where it's from, whether you're expecting confirmation of receipt, how to verify the data arrived intact (a checksum), and so on. The vast majority of this you wouldn't normally need to know, as it is abstracted away by the software you're using, the operating system and the networking hardware.
'I get that per-file operations would add overhead, but why would multiple small files be less efficient in terms of packets?' you ask. Well, when the OS breaks your payload into packets, there is almost always a partially filled packet left over at the end of each file. Take this example (which ignores all the header overhead mentioned above for the sake of simplicity):
You send 64,000 bytes of data. Split into 1500-byte packets, that's 42.67 packets, which rounds up to 43. You're sending this 255 times, resulting in 10,965 packets.
Now you send one file that is 16,320,000 bytes (255 x 64,000). Split into 1500-byte packets, that's 10,880 packets, so fewer packets for the same amount of data.
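Same arithmetic in a few lines of Python, if that helps (this is just the example above, assuming a 1500-byte MTU and ignoring header overhead):

```python
# Packet counts for many small transfers vs. one big one, 1500-byte MTU,
# header overhead ignored for simplicity.
import math

MTU = 1500
small_transfer = 64_000            # bytes per small file
n_transfers = 255

packets_small = math.ceil(small_transfer / MTU) * n_transfers   # 43 * 255
packets_big = math.ceil((small_transfer * n_transfers) / MTU)   # one 16,320,000-byte file

print(packets_small)  # 10965
print(packets_big)    # 10880
```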
'So why is UDP faster?' you ask. While TCP waits for an acknowledgement that data has been received successfully, UDP does not. The UDP header is also much smaller (8 bytes versus 20+ for TCP) because it doesn't carry sequence numbers, acknowledgement numbers or window sizes, so there's a little more room for your data per packet, and the sender never has to wait for confirmation that the previous data arrived. In most cases this means UDP executes faster, but TCP also handles congestion intelligently and will back off if the receiver can't take the heat, while UDP kinda does the opposite and just floods it with data, so in some scenarios it CAN be slower. In general, TCP is what you want unless you're broadcasting/multicasting.
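If it helps, here's a tiny Python loopback sketch of the difference in ceremony between the two. Purely illustrative: the port numbers are arbitrary, and nothing is listening on the UDP port, which is kind of the point, the send still 'succeeds' because nothing waits for an acknowledgement:

```python
import socket
import threading

HOST = "127.0.0.1"
TCP_PORT, UDP_PORT = 50007, 50008   # arbitrary ports picked for the example

# A tiny TCP echo server on loopback so the TCP client has someone to talk to.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind((HOST, TCP_PORT))
server.listen(1)

def echo_once():
    conn, _ = server.accept()
    with conn:
        conn.sendall(conn.recv(1024))   # send the data straight back

threading.Thread(target=echo_once, daemon=True).start()

# TCP: three-way handshake on connect(), then reliable, acknowledged delivery.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as tcp:
    tcp.connect((HOST, TCP_PORT))
    tcp.sendall(b"hello over tcp")
    print("TCP reply:", tcp.recv(1024))
server.close()

# UDP: no connection and no listener on this port, yet sendto() still returns
# happily, because nothing at the protocol level waits for an acknowledgement.
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as udp:
    udp.sendto(b"hello over udp", (HOST, UDP_PORT))
    print("UDP datagram fired off, no acknowledgement expected")
```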
When I read your post, I initially assumed you were manually altering the packet size to override the default behaviour of the TCP/IP stack. If you WERE doing that, what I said before applies: you would need to monitor the network for packet drops to find your maximum transmission size, and set it dynamically. That IS a valid thing people used to do back in the day to eke out small performance gains on janky internet connections by manually overriding their MTU settings. However, what I described is loosely what Path MTU Discovery does automatically anyway, so I doubt it ever needs to be done manually anymore.
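For the historically curious, that old-school manual probing basically boiled down to pinging with the Don't Fragment bit set and shrinking the payload until a packet got through. A rough sketch of the idea (Linux-specific, relies on the iputils ping flags, and example.com is just a placeholder host):

```python
# Crude path-MTU probe: binary-search the largest ping that gets through with
# fragmentation prohibited. IP + ICMP headers add 28 bytes on top of the
# payload, so a 1472-byte payload corresponds to a 1500-byte MTU.
import subprocess

def path_mtu(host: str, low: int = 1200, high: int = 1500) -> int:
    """Find the largest unfragmented packet size that reaches `host`."""
    while low < high:
        mid = (low + high + 1) // 2
        result = subprocess.run(
            ["ping", "-c", "1", "-M", "do", "-s", str(mid - 28), host],
            capture_output=True,
        )
        if result.returncode == 0:   # that size made it through unfragmented
            low = mid
        else:
            high = mid - 1
    return low

print("Largest working MTU:", path_mtu("example.com"))
```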
Sorry if I've just ended up confusing you, I think I confused myself more than anything else (I mostly just wanted to make sure I still understood this crap >.<)