The measurements may or may not refer to the same thing. There are 8 bits to a byte, and on the surface, simple math converts between the two expressions. (A KB/sec is 1,024 8-bit bytes per second; a MB/sec is 1,048,576 8-bit bytes per second.)
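A quick sketch of the unit math (assuming the common convention that link speeds like "10Mbps" use decimal megabits, 10^6 bits, while KB and MB use binary 1,024-based multiples):

```python
# Convert a link speed in megabits per second (decimal, 10^6 bits)
# into user-facing KB/sec and MB/sec (binary, 1,024-based).
def mbps_to_kb_per_sec(mbps):
    bytes_per_sec = mbps * 1_000_000 / 8   # 8 bits per byte
    return bytes_per_sec / 1024            # 1,024 bytes per KB

def mbps_to_mb_per_sec(mbps):
    return mbps_to_kb_per_sec(mbps) / 1024 # 1,024 KB per MB

print(mbps_to_kb_per_sec(10))  # ~1220.7 KB/sec
print(mbps_to_mb_per_sec(10))  # ~1.19 MB/sec
```

So a "10Mbps" connection tops out around 1.2 MB/sec even before any protocol overhead is counted.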
However, you can only convert between the two with confidence if you understand what's being measured. Browsers and P2P programs likely report "user data" transferred, not raw data transferred. Depending on the application protocol, application-level overhead may reduce the quantity of "user data" transferred, because a portion of the data is used by the application to control the transfer.

I'm calling it "user data" because application protocols may encode the source data in a way that inflates its size. For instance, e-mail attachments and XML documents often use Base64 (via MIME) to represent binary data. This expands the data by roughly a third: every 3 bytes of source data become 4 bytes on the wire, so about a quarter of the encoded stream is encoding overhead. If you have a 10Mbps connection sending MIME-encoded data at full speed, that fraction of the transfer is consumed by the application's encoding. This reduces the effective transfer rate. That is, 10Mbps of data is being transferred, but since part of it exists only so the application can recreate the source data, the "user data" transfer speed (i.e. KB/sec or MB/sec) is significantly less than the raw transfer speed.
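The Base64 inflation is easy to demonstrate (a minimal sketch using Python's standard base64 module; the 3,000-byte payload is just an illustrative size):

```python
import base64

raw = b"\x00" * 3000             # 3,000 bytes of arbitrary binary data
encoded = base64.b64encode(raw)  # Base64 maps every 3 source bytes to 4 output bytes

print(len(raw))                  # 3000
print(len(encoded))              # 4000 -> one third larger on the wire
print(len(encoded) / len(raw))   # ~1.33
```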
If you want to read why MIME/Base64 encoding is used, see this Wikipedia primer: en.wikipedia.org/wiki/MIME

If you've wrapped your head around that, it gets even muddier, but the concepts are the same. Below the top-level application protocol is the network protocol. Commonly, TCP/IP is used, which divides the data into packets, and each packet carries a header. The header contains source and destination IP addresses, source and destination ports, a packet sequence number, a checksum, and other "control data". This sits underneath or below the MIME application encoding above, and it also reduces the amount of "user data" that can be transferred.
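A rough back-of-the-envelope for that packet overhead (assuming a typical 1,500-byte Ethernet MTU and minimum-size IPv4 and TCP headers; real connections may add TCP options, TLS, and lower-layer framing on top):

```python
MTU = 1500          # typical Ethernet MTU, in bytes
IP_HEADER = 20      # minimum IPv4 header
TCP_HEADER = 20     # minimum TCP header (no options)

payload = MTU - IP_HEADER - TCP_HEADER  # user data carried per full packet
efficiency = payload / MTU              # fraction of each packet that is user data

print(payload)                 # 1460 bytes
print(round(efficiency, 3))    # 0.973

link_mbps = 10
print(link_mbps * efficiency)  # ~9.73 Mbps of user data, headers excluded
```

Header overhead alone is small (a few percent per full-size packet); in the MIME example above, it's the application-level encoding that dominates.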