Mirror of https://github.com/minio/minio.git
Update Azure Gateway to azure-storage-blob SDK (#8537)
The azure-sdk-for-go/storage package has been in maintenance-only mode since February 2018 (see [1]) and will be deprecated in the future.
parent 5d3d57c12a
commit 947bc8c7d3

CREDITS: 641
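For context on the migration described above: the replacement azure-storage-blob-go SDK is organized around a shared-key credential, a request pipeline, and typed URL values (ServiceURL, ContainerURL, BlobURL) instead of the old storage.Client. The following is a minimal, illustrative sketch of that setup; the account name, key, and container name are placeholders, not values taken from this commit.

package main

import (
	"context"
	"fmt"
	"net/url"

	"github.com/Azure/azure-storage-blob-go/azblob"
)

func main() {
	// Placeholder credentials; real code would read these from configuration.
	accountName, accountKey := "myaccount", "bXlrZXk="

	credential, err := azblob.NewSharedKeyCredential(accountName, accountKey)
	if err != nil {
		panic(err)
	}

	// A pipeline bundles retry options, logging, and the HTTP sender policies.
	p := azblob.NewPipeline(credential, azblob.PipelineOptions{})

	// All operations hang off URL types: ServiceURL -> ContainerURL -> BlobURL.
	endpoint, _ := url.Parse(fmt.Sprintf("https://%s.blob.core.windows.net", accountName))
	serviceURL := azblob.NewServiceURL(*endpoint, p)
	containerURL := serviceURL.NewContainerURL("mycontainer")

	// Create a private container, as the gateway's MakeBucketWithLocation does.
	if _, err = containerURL.Create(context.Background(), azblob.Metadata{}, azblob.PublicAccessNone); err != nil {
		panic(err)
	}
}

The gateway code in the hunks below wires the same pieces together, but with MinIO's custom HTTP transport, a User-Agent header, and a retry timeout derived from its upload chunk size.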
@@ -691,616 +691,59 @@ For the lib/nodejs/lib/thrift/json_parse.js:
[CREDITS diff, condensed: the Apache License, Version 2.0 entries for
github.com/Azure/azure-sdk-for-go (https://github.com/Azure/azure-sdk-for-go) and
github.com/Azure/go-autorest (https://github.com/Azure/go-autorest) are removed, and
MIT License entries, "Copyright (c) Microsoft Corporation. All rights reserved.", are
added for github.com/Azure/azure-pipeline-go (https://github.com/Azure/azure-pipeline-go)
and github.com/Azure/azure-storage-blob-go (https://github.com/Azure/azure-storage-blob-go).
The standard license texts themselves are not reproduced here.]
@@ -23,7 +23,7 @@ import (
 	"net/http"
 	"strings"
 
-	"github.com/Azure/azure-sdk-for-go/storage"
+	"github.com/Azure/azure-storage-blob-go/azblob"
 	"github.com/aliyun/aliyun-oss-go-sdk/oss"
 	"google.golang.org/api/googleapi"
@@ -1796,11 +1796,11 @@ func toAPIError(ctx context.Context, err error) APIError {
 			apiErr.Code = e.Errors[0].Reason
 
 		}
-	case storage.AzureStorageServiceError:
+	case azblob.StorageError:
 		apiErr = APIError{
-			Code:           e.Code,
-			Description:    e.Message,
-			HTTPStatusCode: e.StatusCode,
+			Code:           string(e.ServiceCode()),
+			Description:    e.Error(),
+			HTTPStatusCode: e.Response().StatusCode,
 		}
 	case oss.ServiceError:
 		apiErr = APIError{
@@ -25,15 +25,17 @@ import (
 	"encoding/json"
 	"fmt"
 	"io"
+	"io/ioutil"
 	"net/http"
+	"net/url"
 	"path"
 	"sort"
 	"strconv"
 	"strings"
 	"time"
 
-	"github.com/Azure/azure-sdk-for-go/storage"
-	"github.com/Azure/go-autorest/autorest/azure"
+	"github.com/Azure/azure-pipeline-go/pipeline"
+	"github.com/Azure/azure-storage-blob-go/azblob"
 	humanize "github.com/dustin/go-humanize"
 	"github.com/minio/cli"
 	miniogopolicy "github.com/minio/minio-go/v6/pkg/policy"
@@ -48,7 +50,17 @@ import (
 )
 
 const (
-	globalAzureAPIVersion = "2016-05-31"
+	// The defaultDialTimeout for communicating with the cloud backends is set
+	// to 30 seconds in utils.go; the Azure SDK recommends to set a timeout of 60
+	// seconds per MB of data a client expects to upload so we must transfer less
+	// than 0.5 MB per chunk to stay within the defaultDialTimeout tolerance.
+	// See https://github.com/Azure/azure-storage-blob-go/blob/fc70003/azblob/zc_policy_retry.go#L39-L44 for more details.
+	azureUploadChunkSize      = 0.25 * humanize.MiByte
+	azureSdkTimeout           = (azureUploadChunkSize / humanize.MiByte) * 60 * time.Second
+	azureUploadMaxMemoryUsage = 10 * humanize.MiByte
+	azureUploadConcurrency    = azureUploadMaxMemoryUsage / azureUploadChunkSize
+
+	azureDownloadRetryAttempts = 5
 	azureBlockSize             = 100 * humanize.MiByte
 	azureS3MinPartSize         = 5 * humanize.MiByte
 	metadataObjectNameTemplate = minio.GatewayMinioSysTmp + "multipart/v1/%s.%x/azure.json"
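With these constants, azureSdkTimeout works out to (0.25 MiB / 1 MiB) * 60 s = 15 s per attempt, and azureUploadConcurrency to 10 MiB / 0.25 MiB = 40, i.e. the 10 MiB upload memory budget divided into 256 KiB chunks.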
@@ -144,53 +156,76 @@ func (g *Azure) Name() string {
 	return azureBackend
 }
 
-// All known cloud environments of Azure
-var azureEnvs = []azure.Environment{
-	azure.PublicCloud,
-	azure.USGovernmentCloud,
-	azure.ChinaCloud,
-	azure.GermanCloud,
-}
-
 // NewGatewayLayer initializes azure blob storage client and returns AzureObjects.
 func (g *Azure) NewGatewayLayer(creds auth.Credentials) (minio.ObjectLayer, error) {
-	var err error
-	// The default endpoint is the public cloud
-	var endpoint = azure.PublicCloud.StorageEndpointSuffix
-	var secure = true
-
-	// Load the endpoint url if supplied by the user.
-	if g.host != "" {
-		endpoint, secure, err = minio.ParseGatewayEndpoint(g.host)
-		if err != nil {
-			return nil, err
-		}
-		// Reformat the full account storage endpoint to the base format.
-		// e.g. testazure.blob.core.windows.net => core.windows.net
-		endpoint = strings.ToLower(endpoint)
-		for _, env := range azureEnvs {
-			if strings.Contains(endpoint, env.StorageEndpointSuffix) {
-				endpoint = env.StorageEndpointSuffix
-				break
-			}
-		}
-	}
-
-	c, err := storage.NewClient(creds.AccessKey, creds.SecretKey, endpoint, globalAzureAPIVersion, secure)
+	endpointURL, err := parseStorageEndpoint(g.host, creds.AccessKey)
+	if err != nil {
+		return nil, err
+	}
+
+	credential, err := azblob.NewSharedKeyCredential(creds.AccessKey, creds.SecretKey)
 	if err != nil {
 		return &azureObjects{}, err
 	}
 
-	c.AddToUserAgent(fmt.Sprintf("APN/1.0 MinIO/1.0 MinIO/%s", minio.Version))
-	c.HTTPClient = &http.Client{Transport: minio.NewCustomHTTPTransport()}
+	httpClient := &http.Client{Transport: minio.NewCustomHTTPTransport()}
+	userAgent := fmt.Sprintf("APN/1.0 MinIO/1.0 MinIO/%s", minio.Version)
+
+	pipeline := azblob.NewPipeline(credential, azblob.PipelineOptions{
+		Retry: azblob.RetryOptions{
+			TryTimeout: azureSdkTimeout,
+		},
+		HTTPSender: pipeline.FactoryFunc(func(next pipeline.Policy, po *pipeline.PolicyOptions) pipeline.PolicyFunc {
+			return func(ctx context.Context, request pipeline.Request) (pipeline.Response, error) {
+				request.Header.Set("User-Agent", userAgent)
+				resp, err := httpClient.Do(request.WithContext(ctx))
+				return pipeline.NewHTTPResponse(resp), err
+			}
+		}),
+	})
+
+	client := azblob.NewServiceURL(*endpointURL, pipeline)
 
 	return &azureObjects{
-		endpoint:   fmt.Sprintf("https://%s.blob.core.windows.net", creds.AccessKey),
-		httpClient: c.HTTPClient,
-		client:     c.GetBlobService(),
+		endpoint:   endpointURL.String(),
+		httpClient: httpClient,
+		client:     client,
 	}, nil
 }
 
+func parseStorageEndpoint(host string, accountName string) (*url.URL, error) {
+	var endpoint string
+
+	// Load the endpoint url if supplied by the user.
+	if host != "" {
+		host, secure, err := minio.ParseGatewayEndpoint(host)
+		if err != nil {
+			return nil, err
+		}
+
+		var protocol string
+		if secure {
+			protocol = "https"
+		} else {
+			protocol = "http"
+		}
+
+		// for containerized storage deployments like Azurite or IoT Edge Storage,
+		// account resolution isn't handled via a hostname prefix like
+		// `http://${account}.host/${path}` but instead via a route prefix like
+		// `http://host/${account}/${path}` so adjusting for that here
+		if !strings.HasPrefix(host, fmt.Sprintf("%s.", accountName)) {
+			host = fmt.Sprintf("%s/%s", host, accountName)
+		}
+
+		endpoint = fmt.Sprintf("%s://%s", protocol, host)
+	} else {
+		endpoint = fmt.Sprintf("https://%s.blob.core.windows.net", accountName)
+	}
+
+	return url.Parse(endpoint)
+}
+
 // Production - Azure gateway is production ready.
 func (g *Azure) Production() bool {
 	return true
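As a rough illustration of the endpoint resolution above (using a placeholder account name "myaccount", and assuming minio.ParseGatewayEndpoint reports the host and scheme as its name suggests): an empty host resolves to https://myaccount.blob.core.windows.net, while an Azurite-style endpoint such as http://localhost:10000 resolves to http://localhost:10000/myaccount, because the account is addressed as a route prefix (http://host/${account}/${path}) rather than a hostname prefix.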
@@ -210,11 +245,10 @@ func (g *Azure) Production() bool {
 // copied into BlobProperties.
 //
 // Header names are canonicalized as in http.Header.
-func s3MetaToAzureProperties(ctx context.Context, s3Metadata map[string]string) (storage.BlobMetadata,
-	storage.BlobProperties, error) {
+func s3MetaToAzureProperties(ctx context.Context, s3Metadata map[string]string) (azblob.Metadata, azblob.BlobHTTPHeaders, error) {
 	for k := range s3Metadata {
 		if strings.Contains(k, "--") {
-			return storage.BlobMetadata{}, storage.BlobProperties{}, minio.UnsupportedMetadata{}
+			return azblob.Metadata{}, azblob.BlobHTTPHeaders{}, minio.UnsupportedMetadata{}
 		}
 	}
@@ -232,8 +266,9 @@ func s3MetaToAzureProperties(ctx context.Context, s3Metadata map[string]string)
 		}
 		return strings.Join(tokens, "__")
 	}
-	var blobMeta storage.BlobMetadata = make(map[string]string)
-	var props storage.BlobProperties
+	var blobMeta azblob.Metadata = make(map[string]string)
+	var err error
+	var props azblob.BlobHTTPHeaders
 	for k, v := range s3Metadata {
 		k = http.CanonicalHeaderKey(k)
 		switch {
@@ -253,18 +288,15 @@ func s3MetaToAzureProperties(ctx context.Context, s3Metadata map[string]string)
 			props.ContentDisposition = v
 		case k == "Content-Encoding":
 			props.ContentEncoding = v
-		case k == "Content-Length":
-			// assume this doesn't fail
-			props.ContentLength, _ = strconv.ParseInt(v, 10, 64)
 		case k == "Content-Md5":
-			props.ContentMD5 = v
+			props.ContentMD5, err = base64.StdEncoding.DecodeString(v)
 		case k == "Content-Type":
 			props.ContentType = v
 		case k == "Content-Language":
 			props.ContentLanguage = v
 		}
 	}
-	return blobMeta, props, nil
+	return blobMeta, props, err
 }
 
 const (
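The Content-Md5 change above reflects a type difference between the two SDKs: the old code passed the S3 metadata value through as a string, while azblob.BlobHTTPHeaders.ContentMD5 holds the raw digest bytes, so the gateway now base64-decodes on the way in and re-encodes on the way out (see the azurePropertiesToS3Meta hunk below). A standalone sketch of that round trip, with an arbitrary example body:

package main

import (
	"crypto/md5"
	"encoding/base64"
	"fmt"
)

func main() {
	sum := md5.Sum([]byte("example object body")) // raw 16-byte digest

	// S3-style metadata and the Content-Md5 header carry the digest base64-encoded ...
	s3Value := base64.StdEncoding.EncodeToString(sum[:])

	// ... while azblob.BlobHTTPHeaders.ContentMD5 expects the raw bytes back.
	raw, err := base64.StdEncoding.DecodeString(s3Value)
	fmt.Println(s3Value, len(raw), err) // len(raw) == 16, err == nil
}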
@@ -291,7 +323,7 @@ func newPartMetaV1(uploadID string, partID int) (partMeta *partMetadataV1) {
 // metadata. It is the reverse of s3MetaToAzureProperties. Azure's
 // `.GetMetadata()` lower-cases all header keys, so this is taken into
 // account by this function.
-func azurePropertiesToS3Meta(meta storage.BlobMetadata, props storage.BlobProperties) map[string]string {
+func azurePropertiesToS3Meta(meta azblob.Metadata, props azblob.BlobHTTPHeaders, contentLength int64) map[string]string {
 	// Decoding technique for each key is used here is as follows
 	// Each '_' is converted to '-'
 	// Each '__' is converted to '_'
@@ -327,11 +359,11 @@ func azurePropertiesToS3Meta(meta storage.BlobMetadata, props storage.BlobProper
 	if props.ContentEncoding != "" {
 		s3Metadata["Content-Encoding"] = props.ContentEncoding
 	}
-	if props.ContentLength != 0 {
-		s3Metadata["Content-Length"] = fmt.Sprintf("%d", props.ContentLength)
+	if contentLength != 0 {
+		s3Metadata["Content-Length"] = fmt.Sprintf("%d", contentLength)
 	}
-	if props.ContentMD5 != "" {
-		s3Metadata["Content-MD5"] = props.ContentMD5
+	if len(props.ContentMD5) != 0 {
+		s3Metadata["Content-MD5"] = base64.StdEncoding.EncodeToString(props.ContentMD5)
 	}
 	if props.ContentType != "" {
 		s3Metadata["Content-Type"] = props.ContentType
@@ -347,7 +379,7 @@ type azureObjects struct {
 	minio.GatewayUnsupported
 	endpoint   string
 	httpClient *http.Client
-	client     storage.BlobStorageClient // Azure sdk client
+	client     azblob.ServiceURL // Azure sdk client
 }
 
 // Convert azure errors to minio object layer errors.
@@ -365,14 +397,21 @@ func azureToObjectError(err error, params ...string) error {
 		object = params[1]
 	}
 
-	azureErr, ok := err.(storage.AzureStorageServiceError)
+	azureErr, ok := err.(azblob.StorageError)
 	if !ok {
 		// We don't interpret non Azure errors. As azure errors will
 		// have StatusCode to help to convert to object errors.
 		return err
 	}
 
-	switch azureErr.Code {
+	serviceCode := string(azureErr.ServiceCode())
+	statusCode := azureErr.Response().StatusCode
+
+	return azureCodesToObjectError(err, serviceCode, statusCode, bucket, object)
+}
+
+func azureCodesToObjectError(err error, serviceCode string, statusCode int, bucket string, object string) error {
+	switch serviceCode {
 	case "ContainerAlreadyExists":
 		err = minio.BucketExists{Bucket: bucket}
 	case "InvalidResourceName":
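For example, with this split a response whose service code is "ContainerAlreadyExists" surfaces to the S3 layer as minio.BucketExists{Bucket: bucket}, while codes without a dedicated case fall through to the HTTP status switch in the next hunk, where a 404 on an object path becomes minio.ObjectNotFound.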
@@ -382,7 +421,7 @@ func azureToObjectError(err error, params ...string) error {
 	case "InvalidMetadata":
 		err = minio.UnsupportedMetadata{}
 	default:
-		switch azureErr.StatusCode {
+		switch statusCode {
 		case http.StatusNotFound:
 			if object != "" {
 				err = minio.ObjectNotFound{
@@ -466,10 +505,8 @@ func (a *azureObjects) MakeBucketWithLocation(ctx context.Context, bucket, locat
 		return minio.BucketNameInvalid{Bucket: bucket}
 	}
 
-	container := a.client.GetContainerReference(bucket)
-	err := container.Create(&storage.CreateContainerOptions{
-		Access: storage.ContainerAccessTypePrivate,
-	})
+	containerURL := a.client.NewContainerURL(bucket)
+	_, err := containerURL.Create(ctx, azblob.Metadata{}, azblob.PublicAccessNone)
 	return azureToObjectError(err, bucket)
 }
@@ -477,55 +514,66 @@ func (a *azureObjects) MakeBucketWithLocation(ctx context.Context, bucket, locat
 func (a *azureObjects) GetBucketInfo(ctx context.Context, bucket string) (bi minio.BucketInfo, e error) {
 	// Azure does not have an equivalent call, hence use
 	// ListContainers with prefix
-	resp, err := a.client.ListContainers(storage.ListContainersParameters{
-		Prefix: bucket,
-	})
-	if err != nil {
-		return bi, azureToObjectError(err, bucket)
-	}
-	for _, container := range resp.Containers {
-		if container.Name == bucket {
-			t, e := time.Parse(time.RFC1123, container.Properties.LastModified)
-			if e == nil {
-				return minio.BucketInfo{
-					Name:    bucket,
-					Created: t,
-				}, nil
-			} // else continue
-		}
+	marker := azblob.Marker{}
+
+	for marker.NotDone() {
+		resp, err := a.client.ListContainersSegment(ctx, marker, azblob.ListContainersSegmentOptions{
+			Prefix: bucket,
+		})
+		if err != nil {
+			return bi, azureToObjectError(err, bucket)
+		}
+
+		for _, container := range resp.ContainerItems {
+			if container.Name == bucket {
+				t := container.Properties.LastModified
+				return minio.BucketInfo{
+					Name:    bucket,
+					Created: t,
+				}, nil
+			} // else continue
+		}
+
+		marker = resp.NextMarker
 	}
 	return bi, minio.BucketNotFound{Bucket: bucket}
 }
 
-// ListBuckets - Lists all azure containers, uses Azure equivalent ListContainers.
+// ListBuckets - Lists all azure containers, uses Azure equivalent `ServiceURL.ListContainersSegment`.
 func (a *azureObjects) ListBuckets(ctx context.Context) (buckets []minio.BucketInfo, err error) {
-	resp, err := a.client.ListContainers(storage.ListContainersParameters{})
-	if err != nil {
-		return nil, azureToObjectError(err)
-	}
-	for _, container := range resp.Containers {
-		t, e := time.Parse(time.RFC1123, container.Properties.LastModified)
-		if e != nil {
-			logger.LogIf(ctx, e)
-			return nil, e
-		}
-		buckets = append(buckets, minio.BucketInfo{
-			Name:    container.Name,
-			Created: t,
-		})
+	marker := azblob.Marker{}
+
+	for marker.NotDone() {
+		resp, err := a.client.ListContainersSegment(ctx, marker, azblob.ListContainersSegmentOptions{})
+		if err != nil {
+			return nil, azureToObjectError(err)
+		}
+
+		for _, container := range resp.ContainerItems {
+			t := container.Properties.LastModified
+			buckets = append(buckets, minio.BucketInfo{
+				Name:    container.Name,
+				Created: t,
+			})
+		}
+
+		marker = resp.NextMarker
 	}
 	return buckets, nil
 }
 
-// DeleteBucket - delete a container on azure, uses Azure equivalent DeleteContainer.
+// DeleteBucket - delete a container on azure, uses Azure equivalent `ContainerURL.Delete`.
 func (a *azureObjects) DeleteBucket(ctx context.Context, bucket string) error {
-	container := a.client.GetContainerReference(bucket)
-	err := container.Delete(nil)
+	containerURL := a.client.NewContainerURL(bucket)
+	_, err := containerURL.Delete(ctx, azblob.ContainerAccessConditions{})
 	return azureToObjectError(err, bucket)
 }
 
 // ListObjects - lists all blobs on azure with in a container filtered by prefix
-// and marker, uses Azure equivalent ListBlobs.
+// and marker, uses Azure equivalent `ContainerURL.ListBlobsHierarchySegment`.
 // To accommodate S3-compatible applications using
|
||||||
// ListObjectsV1 to use object keys as markers to control the
|
// ListObjectsV1 to use object keys as markers to control the
|
||||||
// listing of objects, we use the following encoding scheme to
|
// listing of objects, we use the following encoding scheme to
|
||||||
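
The listing hunks above replace the single ListContainers call with a Marker-driven loop over ListContainersSegment. A standalone sketch of that loop; the account name, key and endpoint are placeholders, not values from this commit:

    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"net/url"

    	"github.com/Azure/azure-storage-blob-go/azblob"
    )

    func main() {
    	// Placeholder credentials; replace with a real account name and base64 key.
    	credential, err := azblob.NewSharedKeyCredential("myaccount", "<base64-account-key>")
    	if err != nil {
    		log.Fatal(err)
    	}
    	u, _ := url.Parse("https://myaccount.blob.core.windows.net")
    	serviceURL := azblob.NewServiceURL(*u, azblob.NewPipeline(credential, azblob.PipelineOptions{}))

    	ctx := context.Background()
    	marker := azblob.Marker{}
    	// A zero-value Marker reports NotDone() until the service returns an empty NextMarker.
    	for marker.NotDone() {
    		resp, err := serviceURL.ListContainersSegment(ctx, marker, azblob.ListContainersSegmentOptions{})
    		if err != nil {
    			log.Fatal(err)
    		}
    		for _, container := range resp.ContainerItems {
    			fmt.Println(container.Name, container.Properties.LastModified)
    		}
    		marker = resp.NextMarker
    	}
    }
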
@@ -542,26 +590,25 @@ func (a *azureObjects) ListObjects(ctx context.Context, bucket, prefix, marker,
 	var objects []minio.ObjectInfo
 	var prefixes []string
 
-	azureListMarker := ""
+	azureListMarker := azblob.Marker{}
 	if isAzureMarker(marker) {
 		// If application is using Azure continuation token we should
 		// strip the azureTokenPrefix we added in the previous list response.
-		azureListMarker = strings.TrimPrefix(marker, azureMarkerPrefix)
+		azureMarker := strings.TrimPrefix(marker, azureMarkerPrefix)
+		azureListMarker.Val = &azureMarker
 	}
 
-	container := a.client.GetContainerReference(bucket)
+	containerURL := a.client.NewContainerURL(bucket)
 	for len(objects) == 0 && len(prefixes) == 0 {
-		resp, err := container.ListBlobs(storage.ListBlobsParameters{
+		resp, err := containerURL.ListBlobsHierarchySegment(ctx, azureListMarker, delimiter, azblob.ListBlobsSegmentOptions{
 			Prefix:     prefix,
-			Marker:     azureListMarker,
-			Delimiter:  delimiter,
-			MaxResults: uint(maxKeys),
+			MaxResults: int32(maxKeys),
 		})
 		if err != nil {
 			return result, azureToObjectError(err, bucket, prefix)
 		}
 
-		for _, blob := range resp.Blobs {
+		for _, blob := range resp.Segment.BlobItems {
 			if delimiter == "" && strings.HasPrefix(blob.Name, minio.GatewayMinioSysTmp) {
 				// We filter out minio.GatewayMinioSysTmp entries in the recursive listing.
 				continue
@@ -582,13 +629,10 @@ func (a *azureObjects) ListObjects(ctx context.Context, bucket, prefix, marker,
 			//
 			// Some applications depend on this behavior refer https://github.com/minio/minio/issues/6550
 			// So we handle it here and make this consistent.
-			etag := minio.ToS3ETag(blob.Properties.Etag)
+			etag := minio.ToS3ETag(string(blob.Properties.Etag))
 			switch {
-			case blob.Properties.ContentMD5 != "":
-				b, err := base64.StdEncoding.DecodeString(blob.Properties.ContentMD5)
-				if err == nil {
-					etag = hex.EncodeToString(b)
-				}
+			case len(blob.Properties.ContentMD5) != 0:
+				etag = hex.EncodeToString(blob.Properties.ContentMD5)
 			case blob.Metadata["md5sum"] != "":
 				etag = blob.Metadata["md5sum"]
 				delete(blob.Metadata, "md5sum")
@@ -597,31 +641,31 @@ func (a *azureObjects) ListObjects(ctx context.Context, bucket, prefix, marker,
 			objects = append(objects, minio.ObjectInfo{
 				Bucket:          bucket,
 				Name:            blob.Name,
-				ModTime:         time.Time(blob.Properties.LastModified),
-				Size:            blob.Properties.ContentLength,
+				ModTime:         blob.Properties.LastModified,
+				Size:            *blob.Properties.ContentLength,
 				ETag:            etag,
-				ContentType:     blob.Properties.ContentType,
-				ContentEncoding: blob.Properties.ContentEncoding,
+				ContentType:     *blob.Properties.ContentType,
+				ContentEncoding: *blob.Properties.ContentEncoding,
 			})
 		}
 
-		for _, blobPrefix := range resp.BlobPrefixes {
-			if blobPrefix == minio.GatewayMinioSysTmp {
+		for _, blobPrefix := range resp.Segment.BlobPrefixes {
+			if blobPrefix.Name == minio.GatewayMinioSysTmp {
 				// We don't do strings.HasPrefix(blob.Name, minio.GatewayMinioSysTmp) here so that
 				// we can use tools like mc to inspect the contents of minio.sys.tmp/
 				// It is OK to allow listing of minio.sys.tmp/ in non-recursive mode as it aids in debugging.
 				continue
 			}
-			if !isAzureMarker(marker) && blobPrefix <= marker {
+			if !isAzureMarker(marker) && blobPrefix.Name <= marker {
 				// If the application used ListObjectsV1 style marker then we
 				// skip all the entries till we reach the marker.
 				continue
 			}
-			prefixes = append(prefixes, blobPrefix)
+			prefixes = append(prefixes, blobPrefix.Name)
 		}
 
 		azureListMarker = resp.NextMarker
-		if azureListMarker == "" {
+		if !azureListMarker.NotDone() {
 			// Reached end of listing.
 			break
 		}
@@ -629,10 +673,10 @@ func (a *azureObjects) ListObjects(ctx context.Context, bucket, prefix, marker,
 
 	result.Objects = objects
 	result.Prefixes = prefixes
-	if azureListMarker != "" {
+	if azureListMarker.NotDone() {
 		// We add the {minio} prefix so that we know in the subsequent request that this
 		// marker is a azure continuation token and not ListObjectV1 marker.
-		result.NextMarker = azureMarkerPrefix + azureListMarker
+		result.NextMarker = azureMarkerPrefix + *azureListMarker.Val
 		result.IsTruncated = true
 	}
 	return result, nil
@@ -696,34 +740,24 @@ func (a *azureObjects) GetObject(ctx context.Context, bucket, object string, sta
 		return azureToObjectError(minio.InvalidRange{}, bucket, object)
 	}
 
-	blobRange := &storage.BlobRange{Start: uint64(startOffset)}
-	if length > 0 {
-		blobRange.End = uint64(startOffset + length - 1)
-	}
-
-	blob := a.client.GetContainerReference(bucket).GetBlobReference(object)
-	var rc io.ReadCloser
-	var err error
-	if startOffset == 0 && length == 0 {
-		rc, err = blob.Get(nil)
-	} else {
-		rc, err = blob.GetRange(&storage.GetBlobRangeOptions{
-			Range: blobRange,
-		})
-	}
+	blobURL := a.client.NewContainerURL(bucket).NewBlobURL(object)
+	blob, err := blobURL.Download(ctx, startOffset, length, azblob.BlobAccessConditions{}, false)
 	if err != nil {
 		return azureToObjectError(err, bucket, object)
 	}
 
+	rc := blob.Body(azblob.RetryReaderOptions{MaxRetryRequests: azureDownloadRetryAttempts})
+
 	_, err = io.Copy(writer, rc)
 	rc.Close()
 	return err
 }
 
 // GetObjectInfo - reads blob metadata properties and replies back minio.ObjectInfo,
-// uses zure equivalent GetBlobProperties.
+// uses Azure equivalent `BlobURL.GetProperties`.
 func (a *azureObjects) GetObjectInfo(ctx context.Context, bucket, object string, opts minio.ObjectOptions) (objInfo minio.ObjectInfo, err error) {
-	blob := a.client.GetContainerReference(bucket).GetBlobReference(object)
-	err = blob.GetProperties(nil)
+	blobURL := a.client.NewContainerURL(bucket).NewBlobURL(object)
+	blob, err := blobURL.GetProperties(ctx, azblob.BlobAccessConditions{})
 	if err != nil {
 		return objInfo, azureToObjectError(err, bucket, object)
 	}
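
GetObject above now issues a single ranged Download and wraps the response in a retrying body reader. A standalone sketch of the same pattern; the account, container and blob names are placeholders:

    package main

    import (
    	"context"
    	"io"
    	"log"
    	"net/url"
    	"os"

    	"github.com/Azure/azure-storage-blob-go/azblob"
    )

    func main() {
    	// Placeholder credentials; replace with a real account name and base64 key.
    	credential, err := azblob.NewSharedKeyCredential("myaccount", "<base64-account-key>")
    	if err != nil {
    		log.Fatal(err)
    	}
    	u, _ := url.Parse("https://myaccount.blob.core.windows.net")
    	serviceURL := azblob.NewServiceURL(*u, azblob.NewPipeline(credential, azblob.PipelineOptions{}))
    	blobURL := serviceURL.NewContainerURL("mycontainer").NewBlobURL("myblob")

    	ctx := context.Background()
    	// Read 1 MiB starting at offset 0; azblob.CountToEnd would read to the end.
    	resp, err := blobURL.Download(ctx, 0, 1024*1024, azblob.BlobAccessConditions{}, false)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// The body transparently re-issues the ranged GET on dropped connections.
    	rc := resp.Body(azblob.RetryReaderOptions{MaxRetryRequests: 5})
    	defer rc.Close()
    	if _, err = io.Copy(os.Stdout, rc); err != nil {
    		log.Fatal(err)
    	}
    }
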
@@ -739,89 +773,57 @@ func (a *azureObjects) GetObjectInfo(ctx context.Context, bucket, object string,
 	//
 	// Some applications depend on this behavior refer https://github.com/minio/minio/issues/6550
 	// So we handle it here and make this consistent.
-	etag := minio.ToS3ETag(blob.Properties.Etag)
+	etag := minio.ToS3ETag(string(blob.ETag()))
+	metadata := blob.NewMetadata()
+	contentMD5 := blob.ContentMD5()
 	switch {
-	case blob.Properties.ContentMD5 != "":
-		b, err := base64.StdEncoding.DecodeString(blob.Properties.ContentMD5)
-		if err == nil {
-			etag = hex.EncodeToString(b)
-		}
-	case blob.Metadata["md5sum"] != "":
-		etag = blob.Metadata["md5sum"]
-		delete(blob.Metadata, "md5sum")
+	case len(contentMD5) != 0:
+		etag = hex.EncodeToString(contentMD5)
+	case metadata["md5sum"] != "":
+		etag = metadata["md5sum"]
+		delete(metadata, "md5sum")
 	}
 
 	return minio.ObjectInfo{
 		Bucket:          bucket,
-		UserDefined:     azurePropertiesToS3Meta(blob.Metadata, blob.Properties),
+		UserDefined:     azurePropertiesToS3Meta(metadata, blob.NewHTTPHeaders(), blob.ContentLength()),
 		ETag:            etag,
-		ModTime:         time.Time(blob.Properties.LastModified),
+		ModTime:         blob.LastModified(),
 		Name:            object,
-		Size:            blob.Properties.ContentLength,
-		ContentType:     blob.Properties.ContentType,
-		ContentEncoding: blob.Properties.ContentEncoding,
+		Size:            blob.ContentLength(),
+		ContentType:     blob.ContentType(),
+		ContentEncoding: blob.ContentEncoding(),
 	}, nil
 }
 
 // PutObject - Create a new blob with the incoming data,
-// uses Azure equivalent CreateBlockBlobFromReader.
+// uses Azure equivalent `UploadStreamToBlockBlob`.
 func (a *azureObjects) PutObject(ctx context.Context, bucket, object string, r *minio.PutObjReader, opts minio.ObjectOptions) (objInfo minio.ObjectInfo, err error) {
 	data := r.Reader
-	if data.Size() <= azureBlockSize/2 {
-		blob := a.client.GetContainerReference(bucket).GetBlobReference(object)
-		blob.Metadata, blob.Properties, err = s3MetaToAzureProperties(ctx, opts.UserDefined)
-		if err != nil {
-			return objInfo, azureToObjectError(err, bucket, object)
-		}
-		if err = blob.CreateBlockBlobFromReader(data, nil); err != nil {
-			return objInfo, azureToObjectError(err, bucket, object)
-		}
-		return a.GetObjectInfo(ctx, bucket, object, opts)
-	}
-
-	blob := a.client.GetContainerReference(bucket).GetBlobReference(object)
-	var blocks []storage.Block
-	subPartSize, subPartNumber := int64(azureBlockSize), 1
-	for remainingSize := data.Size(); remainingSize >= 0; remainingSize -= subPartSize {
-		// Allow to create zero sized part.
-		if remainingSize == 0 && subPartNumber > 1 {
-			break
-		}
-
-		if remainingSize < subPartSize {
-			subPartSize = remainingSize
-		}
-
-		id := base64.StdEncoding.EncodeToString([]byte(minio.MustGetUUID()))
-		err = blob.PutBlockWithLength(id, uint64(subPartSize), io.LimitReader(data, subPartSize), nil)
-		if err != nil {
-			return objInfo, azureToObjectError(err, bucket, object)
-		}
-		blocks = append(blocks, storage.Block{
-			ID:     id,
-			Status: storage.BlockStatusUncommitted,
-		})
-		subPartNumber++
-	}
-
-	if err = blob.PutBlockList(blocks, nil); err != nil {
-		return objInfo, azureToObjectError(err, bucket, object)
-	}
-
+	if data.Size() > azureBlockSize/2 {
 		if len(opts.UserDefined) == 0 {
 			opts.UserDefined = map[string]string{}
 		}
 
 		// Save md5sum for future processing on the object.
 		opts.UserDefined["x-amz-meta-md5sum"] = r.MD5CurrentHexString()
-	blob.Metadata, blob.Properties, err = s3MetaToAzureProperties(ctx, opts.UserDefined)
+	}
+
+	metadata, properties, err := s3MetaToAzureProperties(ctx, opts.UserDefined)
 	if err != nil {
 		return objInfo, azureToObjectError(err, bucket, object)
 	}
-	if err = blob.SetProperties(nil); err != nil {
-		return objInfo, azureToObjectError(err, bucket, object)
-	}
-	if err = blob.SetMetadata(nil); err != nil {
+	blobURL := a.client.NewContainerURL(bucket).NewBlockBlobURL(object)
+	_, err = azblob.UploadStreamToBlockBlob(ctx, data, blobURL, azblob.UploadStreamToBlockBlobOptions{
+		BufferSize:      azureUploadChunkSize,
+		MaxBuffers:      azureUploadConcurrency,
+		BlobHTTPHeaders: properties,
+		Metadata:        metadata,
+	})
+	if err != nil {
 		return objInfo, azureToObjectError(err, bucket, object)
 	}
 
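
PutObject above delegates chunking and parallel block staging to azblob.UploadStreamToBlockBlob. A standalone sketch with placeholder names; the buffer size and concurrency shown here are illustrative, not the gateway's azureUploadChunkSize/azureUploadConcurrency values:

    package main

    import (
    	"context"
    	"log"
    	"net/url"
    	"strings"

    	"github.com/Azure/azure-storage-blob-go/azblob"
    )

    func main() {
    	// Placeholder credentials; replace with a real account name and base64 key.
    	credential, err := azblob.NewSharedKeyCredential("myaccount", "<base64-account-key>")
    	if err != nil {
    		log.Fatal(err)
    	}
    	u, _ := url.Parse("https://myaccount.blob.core.windows.net")
    	serviceURL := azblob.NewServiceURL(*u, azblob.NewPipeline(credential, azblob.PipelineOptions{}))
    	blobURL := serviceURL.NewContainerURL("mycontainer").NewBlockBlobURL("myblob")

    	data := strings.NewReader("hello, azure block blob")
    	// The helper reads BufferSize-byte chunks from the reader, stages them as
    	// blocks with up to MaxBuffers in flight, then commits the block list.
    	_, err = azblob.UploadStreamToBlockBlob(context.Background(), data, blobURL,
    		azblob.UploadStreamToBlockBlobOptions{
    			BufferSize:      4 * 1024 * 1024, // 4 MiB per staged block (illustrative)
    			MaxBuffers:      4,               // up to 4 blocks in flight (illustrative)
    			BlobHTTPHeaders: azblob.BlobHTTPHeaders{ContentType: "text/plain"},
    			Metadata:        azblob.Metadata{"origin": "sketch"},
    		})
    	if err != nil {
    		log.Fatal(err)
    	}
    }
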
@@ -829,34 +831,44 @@ func (a *azureObjects) PutObject(ctx context.Context, bucket, object string, r *
 }
 
 // CopyObject - Copies a blob from source container to destination container.
-// Uses Azure equivalent CopyBlob API.
+// Uses Azure equivalent `BlobURL.StartCopyFromURL`.
 func (a *azureObjects) CopyObject(ctx context.Context, srcBucket, srcObject, destBucket, destObject string, srcInfo minio.ObjectInfo, srcOpts, dstOpts minio.ObjectOptions) (objInfo minio.ObjectInfo, err error) {
 	if srcOpts.CheckCopyPrecondFn != nil && srcOpts.CheckCopyPrecondFn(srcInfo, "") {
 		return minio.ObjectInfo{}, minio.PreConditionFailed{}
 	}
-	srcBlobURL := a.client.GetContainerReference(srcBucket).GetBlobReference(srcObject).GetURL()
-	destBlob := a.client.GetContainerReference(destBucket).GetBlobReference(destObject)
+	srcBlobURL := a.client.NewContainerURL(srcBucket).NewBlobURL(srcObject).URL()
+	destBlob := a.client.NewContainerURL(destBucket).NewBlobURL(destObject)
 	azureMeta, props, err := s3MetaToAzureProperties(ctx, srcInfo.UserDefined)
 	if err != nil {
 		return objInfo, azureToObjectError(err, srcBucket, srcObject)
 	}
-	destBlob.Metadata = azureMeta
-	err = destBlob.Copy(srcBlobURL, nil)
+	res, err := destBlob.StartCopyFromURL(ctx, srcBlobURL, azureMeta, azblob.ModifiedAccessConditions{}, azblob.BlobAccessConditions{})
 	if err != nil {
 		return objInfo, azureToObjectError(err, srcBucket, srcObject)
 	}
 
+	// StartCopyFromURL is an asynchronous operation so need to poll for completion,
+	// see https://docs.microsoft.com/en-us/rest/api/storageservices/copy-blob#remarks.
+	copyStatus := res.CopyStatus()
+	for copyStatus != azblob.CopyStatusSuccess {
+		destProps, err := destBlob.GetProperties(ctx, azblob.BlobAccessConditions{})
+		if err != nil {
+			return objInfo, azureToObjectError(err, srcBucket, srcObject)
+		}
+		copyStatus = destProps.CopyStatus()
+	}
+
 	// Azure will copy metadata from the source object when an empty metadata map is provided.
 	// To handle the case where the source object should be copied without its metadata,
 	// the metadata must be removed from the dest. object after the copy completes
-	if len(azureMeta) == 0 && len(destBlob.Metadata) != 0 {
-		destBlob.Metadata = azureMeta
-		err = destBlob.SetMetadata(nil)
+	if len(azureMeta) == 0 {
+		_, err := destBlob.SetMetadata(ctx, azureMeta, azblob.BlobAccessConditions{})
 		if err != nil {
 			return objInfo, azureToObjectError(err, srcBucket, srcObject)
 		}
 	}
-	destBlob.Properties = props
-	err = destBlob.SetProperties(nil)
+	_, err = destBlob.SetHTTPHeaders(ctx, props, azblob.BlobAccessConditions{})
 	if err != nil {
 		return objInfo, azureToObjectError(err, srcBucket, srcObject)
 	}
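
As the new comment in the hunk above notes, StartCopyFromURL is asynchronous and the copy status has to be polled. A standalone sketch of that poll loop with placeholder names; this version sleeps between polls and stops once the status is no longer pending, a slight variation on the loop above:

    package main

    import (
    	"context"
    	"log"
    	"net/url"
    	"time"

    	"github.com/Azure/azure-storage-blob-go/azblob"
    )

    func main() {
    	// Placeholder credentials; replace with a real account name and base64 key.
    	credential, err := azblob.NewSharedKeyCredential("myaccount", "<base64-account-key>")
    	if err != nil {
    		log.Fatal(err)
    	}
    	u, _ := url.Parse("https://myaccount.blob.core.windows.net")
    	serviceURL := azblob.NewServiceURL(*u, azblob.NewPipeline(credential, azblob.PipelineOptions{}))
    	srcURL := serviceURL.NewContainerURL("src").NewBlobURL("object").URL()
    	dstBlob := serviceURL.NewContainerURL("dst").NewBlobURL("object")

    	ctx := context.Background()
    	res, err := dstBlob.StartCopyFromURL(ctx, srcURL, azblob.Metadata{},
    		azblob.ModifiedAccessConditions{}, azblob.BlobAccessConditions{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Poll the destination blob until the service reports the copy is done.
    	for status := res.CopyStatus(); status == azblob.CopyStatusPending; {
    		time.Sleep(time.Second) // back off between polls
    		props, err := dstBlob.GetProperties(ctx, azblob.BlobAccessConditions{})
    		if err != nil {
    			log.Fatal(err)
    		}
    		status = props.CopyStatus()
    	}
    }
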
@@ -864,10 +876,10 @@ func (a *azureObjects) CopyObject(ctx context.Context, srcBucket, srcObject, des
 }
 
 // DeleteObject - Deletes a blob on azure container, uses Azure
-// equivalent DeleteBlob API.
+// equivalent `BlobURL.Delete`.
 func (a *azureObjects) DeleteObject(ctx context.Context, bucket, object string) error {
-	blob := a.client.GetContainerReference(bucket).GetBlobReference(object)
-	err := blob.Delete(nil)
+	blob := a.client.NewContainerURL(bucket).NewBlobURL(object)
+	_, err := blob.Delete(ctx, azblob.DeleteSnapshotsOptionNone, azblob.BlobAccessConditions{})
 	if err != nil {
 		return azureToObjectError(err, bucket, object)
 	}
@@ -909,9 +921,9 @@ func getAzureMetadataPartPrefix(uploadID, objectName string) string {
 }
 
 func (a *azureObjects) checkUploadIDExists(ctx context.Context, bucketName, objectName, uploadID string) (err error) {
-	blob := a.client.GetContainerReference(bucketName).GetBlobReference(
+	blobURL := a.client.NewContainerURL(bucketName).NewBlobURL(
 		getAzureMetadataObjectName(objectName, uploadID))
-	err = blob.GetMetadata(nil)
+	_, err = blobURL.GetProperties(ctx, azblob.BlobAccessConditions{})
 	err = azureToObjectError(err, bucketName, objectName)
 	oerr := minio.ObjectNotFound{
 		Bucket: bucketName,
@@ -925,7 +937,7 @@ func (a *azureObjects) checkUploadIDExists(ctx context.Context, bucketName, obje
 	return err
 }
 
-// NewMultipartUpload - Use Azure equivalent CreateBlockBlob.
+// NewMultipartUpload - Use Azure equivalent `BlobURL.Upload`.
 func (a *azureObjects) NewMultipartUpload(ctx context.Context, bucket, object string, opts minio.ObjectOptions) (uploadID string, err error) {
 	uploadID, err = getAzureUploadID()
 	if err != nil {
@@ -940,8 +952,8 @@ func (a *azureObjects) NewMultipartUpload(ctx context.Context, bucket, object st
 		return "", err
 	}
 
-	blob := a.client.GetContainerReference(bucket).GetBlobReference(metadataObject)
-	err = blob.CreateBlockBlobFromReader(bytes.NewBuffer(jsonData), nil)
+	blobURL := a.client.NewContainerURL(bucket).NewBlockBlobURL(metadataObject)
+	_, err = blobURL.Upload(ctx, bytes.NewReader(jsonData), azblob.BlobHTTPHeaders{}, azblob.Metadata{}, azblob.BlobAccessConditions{})
 	if err != nil {
 		return "", azureToObjectError(err, bucket, metadataObject)
 	}
@@ -949,7 +961,7 @@ func (a *azureObjects) NewMultipartUpload(ctx context.Context, bucket, object st
 	return uploadID, nil
 }
 
-// PutObjectPart - Use Azure equivalent PutBlockWithLength.
+// PutObjectPart - Use Azure equivalent `BlobURL.StageBlock`.
 func (a *azureObjects) PutObjectPart(ctx context.Context, bucket, object, uploadID string, partID int, r *minio.PutObjReader, opts minio.ObjectOptions) (info minio.PartInfo, err error) {
 	data := r.Reader
 	if err = a.checkUploadIDExists(ctx, bucket, object, uploadID); err != nil {
@@ -961,20 +973,19 @@ func (a *azureObjects) PutObjectPart(ctx context.Context, bucket, object, upload
 	}
 
 	partMetaV1 := newPartMetaV1(uploadID, partID)
-	subPartSize, subPartNumber := int64(azureBlockSize), 1
-	for remainingSize := data.Size(); remainingSize >= 0; remainingSize -= subPartSize {
-		// Allow to create zero sized part.
-		if remainingSize == 0 && subPartNumber > 1 {
-			break
-		}
-
+	subPartSize, subPartNumber := int64(azureUploadChunkSize), 1
+	for remainingSize := data.Size(); remainingSize > 0; remainingSize -= subPartSize {
 		if remainingSize < subPartSize {
 			subPartSize = remainingSize
 		}
 
 		id := base64.StdEncoding.EncodeToString([]byte(minio.MustGetUUID()))
-		blob := a.client.GetContainerReference(bucket).GetBlobReference(object)
-		err = blob.PutBlockWithLength(id, uint64(subPartSize), io.LimitReader(data, subPartSize), nil)
+		blobURL := a.client.NewContainerURL(bucket).NewBlockBlobURL(object)
+		body, err := ioutil.ReadAll(io.LimitReader(data, subPartSize))
+		if err != nil {
+			return info, azureToObjectError(err, bucket, object)
+		}
+		_, err = blobURL.StageBlock(ctx, id, bytes.NewReader(body), azblob.LeaseAccessConditions{}, nil)
 		if err != nil {
 			return info, azureToObjectError(err, bucket, object)
 		}
@@ -994,8 +1005,8 @@ func (a *azureObjects) PutObjectPart(ctx context.Context, bucket, object, upload
 		return info, err
 	}
 
-	blob := a.client.GetContainerReference(bucket).GetBlobReference(metadataObject)
-	err = blob.CreateBlockBlobFromReader(bytes.NewBuffer(jsonData), nil)
+	blobURL := a.client.NewContainerURL(bucket).NewBlockBlobURL(metadataObject)
+	_, err = blobURL.Upload(ctx, bytes.NewReader(jsonData), azblob.BlobHTTPHeaders{}, azblob.Metadata{}, azblob.BlobAccessConditions{})
 	if err != nil {
 		return info, azureToObjectError(err, bucket, metadataObject)
 	}
@@ -1007,7 +1018,7 @@ func (a *azureObjects) PutObjectPart(ctx context.Context, bucket, object, upload
 	return info, nil
 }
 
-// ListObjectParts - Use Azure equivalent GetBlockList.
+// ListObjectParts - Use Azure equivalent `ContainerURL.ListBlobsHierarchySegment`.
 func (a *azureObjects) ListObjectParts(ctx context.Context, bucket, object, uploadID string, partNumberMarker int, maxParts int, opts minio.ObjectOptions) (result minio.ListPartsInfo, err error) {
 	if err = a.checkUploadIDExists(ctx, bucket, object, uploadID); err != nil {
 		return result, err
@@ -1018,25 +1029,26 @@ func (a *azureObjects) ListObjectParts(ctx context.Context, bucket, object, uplo
 	result.UploadID = uploadID
 	result.MaxParts = maxParts
 
+	azureListMarker := ""
+	marker := azblob.Marker{Val: &azureListMarker}
+
 	var parts []minio.PartInfo
-	var marker, delimiter string
+	var delimiter string
 	maxKeys := maxPartsCount
 	if partNumberMarker == 0 {
 		maxKeys = maxParts
 	}
 	prefix := getAzureMetadataPartPrefix(uploadID, object)
-	container := a.client.GetContainerReference(bucket)
-	resp, err := container.ListBlobs(storage.ListBlobsParameters{
+	containerURL := a.client.NewContainerURL(bucket)
+	resp, err := containerURL.ListBlobsHierarchySegment(ctx, marker, delimiter, azblob.ListBlobsSegmentOptions{
 		Prefix:     prefix,
-		Marker:     marker,
-		Delimiter:  delimiter,
-		MaxResults: uint(maxKeys),
+		MaxResults: int32(maxKeys),
 	})
 	if err != nil {
 		return result, azureToObjectError(err, bucket, prefix)
 	}
 
-	for _, blob := range resp.Blobs {
+	for _, blob := range resp.Segment.BlobItems {
 		if delimiter == "" && !strings.HasPrefix(blob.Name, minio.GatewayMinioSysTmp) {
 			// We filter out non minio.GatewayMinioSysTmp entries in the recursive listing.
 			continue
@@ -1045,7 +1057,7 @@ func (a *azureObjects) ListObjectParts(ctx context.Context, bucket, object, uplo
 		if strings.HasSuffix(blob.Name, "azure.json") {
 			continue
 		}
-		if !isAzureMarker(marker) && blob.Name <= marker {
+		if !isAzureMarker(*marker.Val) && blob.Name <= *marker.Val {
 			// If the application used ListObjectsV1 style marker then we
 			// skip all the entries till we reach the marker.
 			continue
@@ -1055,11 +1067,12 @@ func (a *azureObjects) ListObjectParts(ctx context.Context, bucket, object, uplo
 			return result, azureToObjectError(fmt.Errorf("Unexpected error"), bucket, object)
 		}
 		var metadata partMetadataV1
-		var metadataReader io.Reader
-		blob := a.client.GetContainerReference(bucket).GetBlobReference(blob.Name)
-		if metadataReader, err = blob.Get(nil); err != nil {
+		blobURL := containerURL.NewBlobURL(blob.Name)
+		blob, err := blobURL.Download(ctx, 0, azblob.CountToEnd, azblob.BlobAccessConditions{}, false)
+		if err != nil {
 			return result, azureToObjectError(fmt.Errorf("Unexpected error"), bucket, object)
 		}
+		metadataReader := blob.Body(azblob.RetryReaderOptions{MaxRetryRequests: azureDownloadRetryAttempts})
 		if err = json.NewDecoder(metadataReader).Decode(&metadata); err != nil {
 			logger.LogIf(ctx, err)
 			return result, azureToObjectError(err, bucket, object)
@@ -1114,9 +1127,9 @@ func (a *azureObjects) AbortMultipartUpload(ctx context.Context, bucket, object,
 			break
 		}
 		for _, part := range lpi.Parts {
-			pblob := a.client.GetContainerReference(bucket).GetBlobReference(
+			pblob := a.client.NewContainerURL(bucket).NewBlobURL(
 				getAzureMetadataPartName(object, uploadID, part.PartNumber))
-			pblob.Delete(nil)
+			pblob.Delete(ctx, azblob.DeleteSnapshotsOptionNone, azblob.BlobAccessConditions{})
 		}
 		partNumberMarker = lpi.NextPartNumberMarker
 		if !lpi.IsTruncated {
@@ -1124,12 +1137,13 @@ func (a *azureObjects) AbortMultipartUpload(ctx context.Context, bucket, object,
 		}
 	}
 
-	blob := a.client.GetContainerReference(bucket).GetBlobReference(
+	blobURL := a.client.NewContainerURL(bucket).NewBlobURL(
 		getAzureMetadataObjectName(object, uploadID))
-	return blob.Delete(nil)
+	_, err = blobURL.Delete(ctx, azblob.DeleteSnapshotsOptionNone, azblob.BlobAccessConditions{})
+	return err
 }
 
-// CompleteMultipartUpload - Use Azure equivalent PutBlockList.
+// CompleteMultipartUpload - Use Azure equivalent `BlobURL.CommitBlockList`.
 func (a *azureObjects) CompleteMultipartUpload(ctx context.Context, bucket, object, uploadID string, uploadedParts []minio.CompletePart, opts minio.ObjectOptions) (objInfo minio.ObjectInfo, err error) {
 	metadataObject := getAzureMetadataObjectName(object, uploadID)
 	if err = a.checkUploadIDExists(ctx, bucket, object, uploadID); err != nil {
@@ -1140,30 +1154,32 @@ func (a *azureObjects) CompleteMultipartUpload(ctx context.Context, bucket, obje
 		return objInfo, err
 	}
 
-	var metadataReader io.Reader
-	blob := a.client.GetContainerReference(bucket).GetBlobReference(metadataObject)
-	if metadataReader, err = blob.Get(nil); err != nil {
+	blobURL := a.client.NewContainerURL(bucket).NewBlobURL(metadataObject)
+	blob, err := blobURL.Download(ctx, 0, azblob.CountToEnd, azblob.BlobAccessConditions{}, false)
+	if err != nil {
 		return objInfo, azureToObjectError(err, bucket, metadataObject)
 	}
 
 	var metadata azureMultipartMetadata
+	metadataReader := blob.Body(azblob.RetryReaderOptions{MaxRetryRequests: azureDownloadRetryAttempts})
 	if err = json.NewDecoder(metadataReader).Decode(&metadata); err != nil {
 		logger.LogIf(ctx, err)
 		return objInfo, azureToObjectError(err, bucket, metadataObject)
 	}
 
-	objBlob := a.client.GetContainerReference(bucket).GetBlobReference(object)
+	objBlob := a.client.NewContainerURL(bucket).NewBlockBlobURL(object)
 
-	var allBlocks []storage.Block
+	var allBlocks []string
 	for i, part := range uploadedParts {
-		var partMetadataReader io.Reader
 		var partMetadata partMetadataV1
 		partMetadataObject := getAzureMetadataPartName(object, uploadID, part.PartNumber)
-		pblob := a.client.GetContainerReference(bucket).GetBlobReference(partMetadataObject)
-		if partMetadataReader, err = pblob.Get(nil); err != nil {
+		pblobURL := a.client.NewContainerURL(bucket).NewBlobURL(partMetadataObject)
+		pblob, err := pblobURL.Download(ctx, 0, azblob.CountToEnd, azblob.BlobAccessConditions{}, false)
+		if err != nil {
 			return objInfo, azureToObjectError(err, bucket, partMetadataObject)
 		}
 
+		partMetadataReader := pblob.Body(azblob.RetryReaderOptions{MaxRetryRequests: azureDownloadRetryAttempts})
 		if err = json.NewDecoder(partMetadataReader).Decode(&partMetadata); err != nil {
 			logger.LogIf(ctx, err)
 			return objInfo, azureToObjectError(err, bucket, partMetadataObject)
@@ -1172,9 +1188,7 @@ func (a *azureObjects) CompleteMultipartUpload(ctx context.Context, bucket, obje
 		if partMetadata.ETag != part.ETag {
 			return objInfo, minio.InvalidPart{}
 		}
-		for _, blockID := range partMetadata.BlockIDs {
-			allBlocks = append(allBlocks, storage.Block{ID: blockID, Status: storage.BlockStatusUncommitted})
-		}
+		allBlocks = append(allBlocks, partMetadata.BlockIDs...)
 		if i < (len(uploadedParts)-1) && partMetadata.Size < azureS3MinPartSize {
 			return objInfo, minio.PartTooSmall{
 				PartNumber: uploadedParts[i].PartNumber,
@@ -1184,20 +1198,13 @@ func (a *azureObjects) CompleteMultipartUpload(ctx context.Context, bucket, obje
 		}
 	}
 
-	err = objBlob.PutBlockList(allBlocks, nil)
+	objMetadata, objProperties, err := s3MetaToAzureProperties(ctx, metadata.Metadata)
 	if err != nil {
 		return objInfo, azureToObjectError(err, bucket, object)
 	}
-	objBlob.Metadata, objBlob.Properties, err = s3MetaToAzureProperties(ctx, metadata.Metadata)
-	if err != nil {
-		return objInfo, azureToObjectError(err, bucket, object)
-	}
-	objBlob.Metadata["md5sum"] = cmd.ComputeCompleteMultipartMD5(uploadedParts)
-	err = objBlob.SetProperties(nil)
-	if err != nil {
-		return objInfo, azureToObjectError(err, bucket, object)
-	}
-	err = objBlob.SetMetadata(nil)
+	objMetadata["md5sum"] = cmd.ComputeCompleteMultipartMD5(uploadedParts)
+
+	_, err = objBlob.CommitBlockList(ctx, allBlocks, objProperties, objMetadata, azblob.BlobAccessConditions{})
 	if err != nil {
 		return objInfo, azureToObjectError(err, bucket, object)
 	}
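
The multipart hunks above stage each part as uncommitted blocks (StageBlock) and assemble them in CompleteMultipartUpload with CommitBlockList. A standalone sketch of that two-step flow with placeholder names and block IDs:

    package main

    import (
    	"bytes"
    	"context"
    	"encoding/base64"
    	"fmt"
    	"log"
    	"net/url"

    	"github.com/Azure/azure-storage-blob-go/azblob"
    )

    func main() {
    	// Placeholder credentials; replace with a real account name and base64 key.
    	credential, err := azblob.NewSharedKeyCredential("myaccount", "<base64-account-key>")
    	if err != nil {
    		log.Fatal(err)
    	}
    	u, _ := url.Parse("https://myaccount.blob.core.windows.net")
    	serviceURL := azblob.NewServiceURL(*u, azblob.NewPipeline(credential, azblob.PipelineOptions{}))
    	blobURL := serviceURL.NewContainerURL("mycontainer").NewBlockBlobURL("myblob")

    	ctx := context.Background()
    	var blockIDs []string
    	for i, chunk := range [][]byte{[]byte("part-1 "), []byte("part-2")} {
    		// Block IDs must be base64 strings of equal length within one blob.
    		id := base64.StdEncoding.EncodeToString([]byte(fmt.Sprintf("block-%08d", i)))
    		// Stage the block; it stays uncommitted until CommitBlockList runs.
    		_, err = blobURL.StageBlock(ctx, id, bytes.NewReader(chunk), azblob.LeaseAccessConditions{}, nil)
    		if err != nil {
    			log.Fatal(err)
    		}
    		blockIDs = append(blockIDs, id)
    	}
    	// Commit the staged blocks, in order, as the final blob content.
    	_, err = blobURL.CommitBlockList(ctx, blockIDs, azblob.BlobHTTPHeaders{}, azblob.Metadata{}, azblob.BlobAccessConditions{})
    	if err != nil {
    		log.Fatal(err)
    	}
    }
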
@@ -1208,9 +1215,9 @@ func (a *azureObjects) CompleteMultipartUpload(ctx context.Context, bucket, obje
 			break
 		}
 		for _, part := range lpi.Parts {
-			pblob := a.client.GetContainerReference(bucket).GetBlobReference(
+			pblob := a.client.NewContainerURL(bucket).NewBlobURL(
 				getAzureMetadataPartName(object, uploadID, part.PartNumber))
-			pblob.Delete(nil)
+			pblob.Delete(ctx, azblob.DeleteSnapshotsOptionNone, azblob.BlobAccessConditions{})
 		}
 		partNumberMarker = lpi.NextPartNumberMarker
 		if !lpi.IsTruncated {
@@ -1218,8 +1225,7 @@ func (a *azureObjects) CompleteMultipartUpload(ctx context.Context, bucket, obje
 		}
 	}
 
-	blob = a.client.GetContainerReference(bucket).GetBlobReference(metadataObject)
-	derr := blob.Delete(nil)
+	_, derr := blobURL.Delete(ctx, azblob.DeleteSnapshotsOptionNone, azblob.BlobAccessConditions{})
 	logger.GetReqInfo(ctx).AppendTags("uploadID", uploadID)
 	logger.LogIf(ctx, derr)
 
@@ -1227,9 +1233,9 @@ func (a *azureObjects) CompleteMultipartUpload(ctx context.Context, bucket, obje
 }
 
 // SetBucketPolicy - Azure supports three types of container policies:
-// storage.ContainerAccessTypeContainer - readonly in minio terminology
-// storage.ContainerAccessTypeBlob - readonly without listing in minio terminology
-// storage.ContainerAccessTypePrivate - none in minio terminology
+// azblob.PublicAccessContainer - readonly in minio terminology
+// azblob.PublicAccessBlob - readonly without listing in minio terminology
+// azblob.PublicAccessNone - none in minio terminology
 // As the common denominator for minio and azure is readonly and none, we support
 // these two policies at the bucket level.
 func (a *azureObjects) SetBucketPolicy(ctx context.Context, bucket string, bucketPolicy *policy.Policy) error {
@@ -1257,26 +1263,25 @@ func (a *azureObjects) SetBucketPolicy(ctx context.Context, bucket string, bucke
 	if policies[0].Policy != miniogopolicy.BucketPolicyReadOnly {
 		return minio.NotImplemented{}
 	}
-	perm := storage.ContainerPermissions{
-		AccessType:     storage.ContainerAccessTypeContainer,
-		AccessPolicies: nil,
-	}
-	container := a.client.GetContainerReference(bucket)
-	err = container.SetPermissions(perm, nil)
+	perm := azblob.PublicAccessContainer
+	container := a.client.NewContainerURL(bucket)
+	_, err = container.SetAccessPolicy(ctx, perm, nil, azblob.ContainerAccessConditions{})
 	return azureToObjectError(err, bucket)
 }
 
 // GetBucketPolicy - Get the container ACL and convert it to canonical []bucketAccessPolicy
 func (a *azureObjects) GetBucketPolicy(ctx context.Context, bucket string) (*policy.Policy, error) {
-	container := a.client.GetContainerReference(bucket)
-	perm, err := container.GetPermissions(nil)
+	container := a.client.NewContainerURL(bucket)
+	perm, err := container.GetAccessPolicy(ctx, azblob.LeaseAccessConditions{})
 	if err != nil {
 		return nil, azureToObjectError(err, bucket)
 	}
 
-	if perm.AccessType == storage.ContainerAccessTypePrivate {
+	permAccessType := perm.BlobPublicAccess()
+
+	if permAccessType == azblob.PublicAccessNone {
 		return nil, minio.BucketPolicyNotFound{Bucket: bucket}
-	} else if perm.AccessType != storage.ContainerAccessTypeContainer {
+	} else if permAccessType != azblob.PublicAccessContainer {
 		return nil, azureToObjectError(minio.NotImplemented{})
 	}
 
@@ -1303,12 +1308,9 @@ func (a *azureObjects) GetBucketPolicy(ctx context.Context, bucket string) (*pol
 
 // DeleteBucketPolicy - Set the container ACL to "private"
 func (a *azureObjects) DeleteBucketPolicy(ctx context.Context, bucket string) error {
-	perm := storage.ContainerPermissions{
-		AccessType:     storage.ContainerAccessTypePrivate,
-		AccessPolicies: nil,
-	}
-	container := a.client.GetContainerReference(bucket)
-	err := container.SetPermissions(perm, nil)
+	perm := azblob.PublicAccessNone
+	containerURL := a.client.NewContainerURL(bucket)
+	_, err := containerURL.SetAccessPolicy(ctx, perm, nil, azblob.ContainerAccessConditions{})
 	return azureToObjectError(err)
 }
 
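
The bucket policy hunks above map MinIO's readonly/none policies onto the container's public-access level. A standalone sketch of the SetAccessPolicy/GetAccessPolicy calls involved, with placeholder names:

    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"net/url"

    	"github.com/Azure/azure-storage-blob-go/azblob"
    )

    func main() {
    	// Placeholder credentials; replace with a real account name and base64 key.
    	credential, err := azblob.NewSharedKeyCredential("myaccount", "<base64-account-key>")
    	if err != nil {
    		log.Fatal(err)
    	}
    	u, _ := url.Parse("https://myaccount.blob.core.windows.net")
    	serviceURL := azblob.NewServiceURL(*u, azblob.NewPipeline(credential, azblob.PipelineOptions{}))
    	containerURL := serviceURL.NewContainerURL("mycontainer")

    	ctx := context.Background()
    	// Make the container publicly readable, including listings ("readonly" in minio terms).
    	if _, err = containerURL.SetAccessPolicy(ctx, azblob.PublicAccessContainer, nil, azblob.ContainerAccessConditions{}); err != nil {
    		log.Fatal(err)
    	}
    	perm, err := containerURL.GetAccessPolicy(ctx, azblob.LeaseAccessConditions{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("current access:", perm.BlobPublicAccess()) // PublicAccessNone means private
    }
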
@@ -18,15 +18,44 @@ package azure
 
 import (
 	"context"
+	"encoding/base64"
 	"fmt"
 	"net/http"
 	"reflect"
 	"testing"
 
-	"github.com/Azure/azure-sdk-for-go/storage"
+	"github.com/Azure/azure-storage-blob-go/azblob"
 	minio "github.com/minio/minio/cmd"
 )
 
+func TestParseStorageEndpoint(t *testing.T) {
+	testCases := []struct {
+		host        string
+		accountName string
+		expectedURL string
+		expectedErr error
+	}{
+		{
+			"", "myaccount", "https://myaccount.blob.core.windows.net", nil,
+		},
+		{
+			"myaccount.blob.core.usgovcloudapi.net", "myaccount", "https://myaccount.blob.core.usgovcloudapi.net", nil,
+		},
+		{
+			"http://localhost:10000", "myaccount", "http://localhost:10000/myaccount", nil,
+		},
+	}
+	for i, testCase := range testCases {
+		endpointURL, err := parseStorageEndpoint(testCase.host, testCase.accountName)
+		if err != testCase.expectedErr {
+			t.Errorf("Test %d: Expected error %s, got %s", i+1, testCase.expectedErr, err)
+		}
+		if endpointURL.String() != testCase.expectedURL {
+			t.Errorf("Test %d: Expected URL %s, got %s", i+1, testCase.expectedURL, endpointURL.String())
+		}
+	}
+}
+
 // Test canonical metadata.
 func TestS3MetaToAzureProperties(t *testing.T) {
 	headers := map[string]string{
@@ -79,7 +108,7 @@ func TestS3MetaToAzureProperties(t *testing.T) {
 	if err != nil {
 		t.Fatalf("Test failed, with %s", err)
 	}
-	if props.ContentMD5 != headers["content-md5"] {
+	if base64.StdEncoding.EncodeToString(props.ContentMD5) != headers["content-md5"] {
 		t.Fatalf("Test failed, expected %s, got %s", headers["content-md5"], props.ContentMD5)
 	}
 }
@@ -110,23 +139,22 @@ func TestAzurePropertiesToS3Meta(t *testing.T) {
 		"Content-Disposition": "dummy",
 		"Content-Encoding":    "gzip",
 		"Content-Length":      "10",
-		"Content-MD5":         "base64-md5",
+		"Content-MD5":         base64.StdEncoding.EncodeToString([]byte("base64-md5")),
 		"Content-Type":        "application/javascript",
 	}
-	actualMeta := azurePropertiesToS3Meta(metadata, storage.BlobProperties{
+	actualMeta := azurePropertiesToS3Meta(metadata, azblob.BlobHTTPHeaders{
 		CacheControl:       "max-age: 3600",
 		ContentDisposition: "dummy",
 		ContentEncoding:    "gzip",
-		ContentLength:      10,
-		ContentMD5:         "base64-md5",
+		ContentMD5:         []byte("base64-md5"),
 		ContentType:        "application/javascript",
-	})
+	}, 10)
 	if !reflect.DeepEqual(actualMeta, expectedMeta) {
 		t.Fatalf("Test failed, expected %#v, got %#v", expectedMeta, actualMeta)
 	}
 }
 
-// Add tests for azure to object error.
+// Add tests for azure to object error (top level).
 func TestAzureToObjectError(t *testing.T) {
 	testCases := []struct {
 		actualErr   error
@@ -140,50 +168,74 @@ func TestAzureToObjectError(t *testing.T) {
 			fmt.Errorf("Non azure error"),
 			fmt.Errorf("Non azure error"), "", "",
 		},
-		{
-			storage.AzureStorageServiceError{
-				Code: "ContainerAlreadyExists",
-			}, minio.BucketExists{Bucket: "bucket"}, "bucket", "",
-		},
-		{
-			storage.AzureStorageServiceError{
-				Code: "InvalidResourceName",
-			}, minio.BucketNameInvalid{Bucket: "bucket."}, "bucket.", "",
-		},
-		{
-			storage.AzureStorageServiceError{
-				Code: "RequestBodyTooLarge",
-			}, minio.PartTooBig{}, "", "",
-		},
-		{
-			storage.AzureStorageServiceError{
-				Code: "InvalidMetadata",
-			}, minio.UnsupportedMetadata{}, "", "",
-		},
-		{
-			storage.AzureStorageServiceError{
-				StatusCode: http.StatusNotFound,
-			}, minio.ObjectNotFound{
-				Bucket: "bucket",
-				Object: "object",
-			}, "bucket", "object",
-		},
-		{
-			storage.AzureStorageServiceError{
-				StatusCode: http.StatusNotFound,
-			}, minio.BucketNotFound{Bucket: "bucket"}, "bucket", "",
-		},
-		{
-			storage.AzureStorageServiceError{
-				StatusCode: http.StatusBadRequest,
-			}, minio.BucketNameInvalid{Bucket: "bucket."}, "bucket.", "",
-		},
 	}
 	for i, testCase := range testCases {
 		if err := azureToObjectError(testCase.actualErr, testCase.bucket, testCase.object); err != nil {
 			if err.Error() != testCase.expectedErr.Error() {
 				t.Errorf("Test %d: Expected error %s, got %s", i+1, testCase.expectedErr, err)
 			}
+		} else {
+			if testCase.expectedErr != nil {
+				t.Errorf("Test %d expected an error but one was not produced", i+1)
+			}
+		}
+	}
+}
+
+// Add tests for azure to object error (internal).
+func TestAzureCodesToObjectError(t *testing.T) {
+	testCases := []struct {
+		originalErr       error
+		actualServiceCode string
+		actualStatusCode  int
+		expectedErr       error
+		bucket, object    string
+	}{
+		{
+			nil, "ContainerAlreadyExists", 0,
+			minio.BucketExists{Bucket: "bucket"}, "bucket", "",
+		},
+		{
+			nil, "InvalidResourceName", 0,
+			minio.BucketNameInvalid{Bucket: "bucket."}, "bucket.", "",
+		},
+		{
+			nil, "RequestBodyTooLarge", 0,
+			minio.PartTooBig{}, "", "",
+		},
+		{
+			nil, "InvalidMetadata", 0,
+			minio.UnsupportedMetadata{}, "", "",
+		},
+		{
+			nil, "", http.StatusNotFound,
+			minio.ObjectNotFound{
+				Bucket: "bucket",
+				Object: "object",
+			}, "bucket", "object",
+		},
+		{
+			nil, "", http.StatusNotFound,
+			minio.BucketNotFound{Bucket: "bucket"}, "bucket", "",
+		},
+		{
+			nil, "", http.StatusBadRequest,
+			minio.BucketNameInvalid{Bucket: "bucket."}, "bucket.", "",
+		},
+		{
+			fmt.Errorf("unhandled azure error"), "", http.StatusForbidden,
+			fmt.Errorf("unhandled azure error"), "", "",
+		},
+	}
+	for i, testCase := range testCases {
+		if err := azureCodesToObjectError(testCase.originalErr, testCase.actualServiceCode, testCase.actualStatusCode, testCase.bucket, testCase.object); err != nil {
+			if err.Error() != testCase.expectedErr.Error() {
+				t.Errorf("Test %d: Expected error %s, got %s", i+1, testCase.expectedErr, err)
+			}
+		} else {
+			if testCase.expectedErr != nil {
+				t.Errorf("Test %d expected an error but one was not produced", i+1)
+			}
 		}
 	}
 }
|
go.mod (5 lines changed)

@@ -4,8 +4,8 @@ go 1.13
 
 require (
 	cloud.google.com/go v0.37.2
-	github.com/Azure/azure-sdk-for-go v33.4.0+incompatible
-	github.com/Azure/go-autorest v11.7.0+incompatible
+	github.com/Azure/azure-pipeline-go v0.2.1
+	github.com/Azure/azure-storage-blob-go v0.8.0
 	github.com/StackExchange/wmi v0.0.0-20190523213315-cbe66965904d // indirect
 	github.com/alecthomas/participle v0.2.1
 	github.com/aliyun/aliyun-oss-go-sdk v0.0.0-20190307165228-86c17b95fcd5
@@ -37,6 +37,7 @@ require (
 	github.com/klauspost/reedsolomon v1.9.3
 	github.com/kurin/blazer v0.5.4-0.20190613185654-cf2f27cc0be3
 	github.com/lib/pq v1.0.0
+	github.com/mattn/go-ieproxy v0.0.0-20190805055040-f9202b1cfdeb // indirect; Pinned for FreeBSD support.
 	github.com/miekg/dns v1.1.8
 	github.com/minio/cli v1.22.0
 	github.com/minio/gokrb5/v7 v7.2.5
go.sum (7 lines changed)

@@ -12,11 +12,15 @@ contrib.go.opencensus.io/exporter/stackdriver v0.0.0-20180919222851-d1e19f5c23e9
 git.apache.org/thrift.git v0.0.0-20180902110319-2566ecd5d999/go.mod h1:fPE2ZNJGynbRyZ4dJvy6G277gSllfV2HJqblrnkyeyg=
 git.apache.org/thrift.git v0.12.0 h1:CMxsZlAmxKs+VAZMlDDL0wXciMblJcutQbEe3A9CYUM=
 git.apache.org/thrift.git v0.12.0/go.mod h1:fPE2ZNJGynbRyZ4dJvy6G277gSllfV2HJqblrnkyeyg=
+github.com/Azure/azure-pipeline-go v0.2.1 h1:OLBdZJ3yvOn2MezlWvbrBMTEUQC72zAftRZOMdj5HYo=
+github.com/Azure/azure-pipeline-go v0.2.1/go.mod h1:UGSo8XybXnIGZ3epmeBw7Jdz+HiUVpqIlpz/HKHylF4=
 github.com/Azure/azure-sdk-for-go v26.4.0+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc=
 github.com/Azure/azure-sdk-for-go v27.0.0+incompatible h1:JknnG+RYTnwzpi+YuQ04/dAWIssbubSRD8arN78I+Qo=
 github.com/Azure/azure-sdk-for-go v27.0.0+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc=
 github.com/Azure/azure-sdk-for-go v33.4.0+incompatible h1:yzJKzcKTX0WwDdZC8kAqxiGVZz66uqpajhgphstEUN0=
 github.com/Azure/azure-sdk-for-go v33.4.0+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc=
+github.com/Azure/azure-storage-blob-go v0.8.0 h1:53qhf0Oxa0nOjgbDeeYPUeyiNmafAFEY95rZLK0Tj6o=
+github.com/Azure/azure-storage-blob-go v0.8.0/go.mod h1:lPI3aLPpuLTeUwh1sViKXFxwl2B6teiRqI0deQUvsw0=
 github.com/Azure/go-ansiterm v0.0.0-20170929234023-d6e3b3328b78/go.mod h1:LmzpDX56iTiv29bbRTIsUNlaFfuhWRQBWjQdVyAevI8=
 github.com/Azure/go-autorest v11.5.2+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24=
 github.com/Azure/go-autorest v11.7.0+incompatible h1:gzma19dc9ejB75D90E5S+/wXouzpZyA+CV+/MJPSD/k=
@@ -396,6 +400,9 @@ github.com/mattn/go-colorable v0.0.0-20160210001857-9fdad7c47650/go.mod h1:9vuHe
 github.com/mattn/go-colorable v0.0.9/go.mod h1:9vuHe8Xs5qXnSaW/c/ABM9alt+Vo+STaOChaDxuIBZU=
 github.com/mattn/go-colorable v0.1.1 h1:G1f5SKeVxmagw/IyvzvtZE4Gybcc4Tr1tf7I8z0XgOg=
 github.com/mattn/go-colorable v0.1.1/go.mod h1:FuOcm+DKB9mbwrcAfNl7/TZVBZ6rcnceauSikq3lYCQ=
+github.com/mattn/go-ieproxy v0.0.0-20190610004146-91bb50d98149/go.mod h1:31jz6HNzdxOmlERGGEc4v/dMssOfmp2p5bT/okiKFFc=
+github.com/mattn/go-ieproxy v0.0.0-20190805055040-f9202b1cfdeb h1:hXqqXzQtJbENrsb+rsIqkVqcg4FUJL0SQFGw08Dgivw=
+github.com/mattn/go-ieproxy v0.0.0-20190805055040-f9202b1cfdeb/go.mod h1:31jz6HNzdxOmlERGGEc4v/dMssOfmp2p5bT/okiKFFc=
 github.com/mattn/go-isatty v0.0.3/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4=
 github.com/mattn/go-isatty v0.0.4/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4=
 github.com/mattn/go-isatty v0.0.5/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s=
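The go.mod and go.sum changes above replace the maintenance-only github.com/Azure/azure-sdk-for-go storage package (and its go-autorest requirement) with azure-pipeline-go and azure-storage-blob-go. As background, the sketch below shows how a client is typically wired up with the new SDK; the account name, key, and container name are placeholders, and this is not the gateway's actual initialization code.

package main

import (
	"context"
	"fmt"
	"log"
	"net/url"

	"github.com/Azure/azure-storage-blob-go/azblob"
)

func main() {
	// Placeholder credentials; real values come from the environment or config.
	accountName, accountKey := "myaccount", "bXlrZXk="

	credential, err := azblob.NewSharedKeyCredential(accountName, accountKey)
	if err != nil {
		log.Fatal(err)
	}

	// The pipeline (from azure-pipeline-go) carries the auth, retry and logging policies.
	p := azblob.NewPipeline(credential, azblob.PipelineOptions{})

	u, err := url.Parse(fmt.Sprintf("https://%s.blob.core.windows.net", accountName))
	if err != nil {
		log.Fatal(err)
	}

	serviceURL := azblob.NewServiceURL(*u, p)
	containerURL := serviceURL.NewContainerURL("mycontainer")

	// Create the container; a ContainerAlreadyExists service code would be
	// translated by helpers like the one tested above.
	if _, err := containerURL.Create(context.Background(), azblob.Metadata{}, azblob.PublicAccessNone); err != nil {
		log.Println("create container:", err)
	}
}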